
With AI at their fingertips, are students still learning?

The ubiquity of artificial intelligence may be affecting students’ cognitive development. Gareth Morris and Bamidele Akinwolemiwa consider how to address this


19 Jun 2024
A red toy robot sits on a pile of books. Image credit: iStock/breakermaximus.

Created in partnership with Nottingham Ningbo


The arrival of generative AI (GenAI) has completely shifted the discussion about humanity and technology, with the stories of tomorrow emerging today. But what does this mean for higher education? Some suggest that AI may become embedded in all operational areas of institutions. It’s certainly not impossible. Others, such as Sal Khan, suggest that the playing field can be not only levelled but potentially improved through the uptake of Socratic AI tutoring. At both a broad social level and an institutional one, the benefits are attractive, but the challenges are also significant.

Pedagogical innovations 

Students are now turning to GenAI tools to gain insight into questions they would previously have asked in class, and to enhance, clarify or sometimes extend their notes. One challenge is that some students over-rely on these tools, which may affect their learning process. Even training on the ethical and appropriate use of an AI tool may not, in itself, prevent that over-reliance.

More concerning is the influence that these tools will have on students’ cognitive engagement with the subject matter. In some cases, students will no longer make an effort to learn certain skills, instead leaving the thinking and writing process to technology. We therefore need to implement creative, bespoke pedagogical approaches that anticipate students’ use of AI. It’s imperative that this informs teaching and assessment approaches, the development of teaching and learning materials, and the modification of learning outcomes for specific subject areas.

Essentially, the skills expected of students in a world where these tools are ubiquitous have to be critically evaluated, both in the classroom and at an institutional level. We must develop methods that prevent students from delegating their cognitive engagement, and the development of their critical thinking skills, to technology.

The fallible coursework essay

Let’s take a practical analytical assessment norm as a starting point. The traditional coursework essay, a stalwart of university courses around the globe thanks to its capacity to assess intended course learning outcomes, is potentially fatally compromised.

Traditionally, students write an essay by planning, drafting and refining their work. To do so, they might access and reflect on suggestions from online writing tools such as Microsoft Word’s built-in review features, external services such as Grammarly, peer feedback and formative input from course tutors. Academics have raised concerns about academic integrity and honesty for the past couple of decades, but the plagiarism-prevention service Turnitin helped to reassure many.

However, the prospect of a more powerful, prompt-driven super author, freely available to students at the click of a button, is looming large. What can course designers, programme managers and higher education leaders do?

The assessment solutions

Definitively identifying AI-written work, with no markers apparent and no previous draft data to refer back to, is extremely difficult. 

AI identifiers such as Quillbot are useful, but not infallible. At times they pick up on machine-translated work, but they can also miss it. False positives are possible, too, with human-written texts misidentified as AI-generated.

One solution is a balance-of-probabilities approach tied into institutional policies on academic integrity, honesty and authorship. Giving students appropriate writing-skills training, and clearly outlining expectations and the consequences if they’re not met, will work in many cases. But let’s be realistic – not all.

Another alternative is to change the assessment parameters. Portfolio-style work takes on greater weighting, placing the emphasis on the process rather than simply the end product. If that process is too protracted or fallible because of time constraints, extended exam-style or practical-experiment conditions could be applied to assess key skills.

Taken a step further, institutional resources, such as writing centres and centrally managed IT applications, can also be used to level the playing field even more. In addition, adaptations can be made in assessment cases where extra time or support is required.

Of course, educators need to ensure that AI is not holding the pen, especially for module learning outcomes designed to inform future employers that students have mastered these skills and competencies. Yet students are facing a future in which technologies only being dreamed of today are the backbone of the workforce. We need to design courses specifically for this purpose and measure how well students can work in tandem with these resources. But for now, and for the majority of courses, promoting honesty and integrity is the way forward for a better relationship with AI.

Gareth Morris works at the Centre for English Language Education and Bamidele Akinwolemiwa is a researcher and graduate teaching assistant, both at the University of Nottingham Ningbo.

If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.
