
Shifting sands of academic integrity in the age of AI

Despite concerns about the use of generative AI, universities are beginning to understand how issues around academic integrity can be a learning opportunity for students and teachers alike

Turnitin
28 Oct 2024
Sponsored by Turnitin
Learn how Turnitin empowers educators with insights and sets students up for success

How students use AI in their work has reached a crossroads. Generative AI tools such as ChatGPT are blurring the boundaries of academic integrity and transforming how students learn, but educators are struggling to build equitable policies that keep pace with the change.

Delivering a keynote at the 2024 THE Digital Universities Arab World event in Cairo, Egypt, Aaron Yaverski, regional vice-president of EMEA at Turnitin, described the challenges universities face in developing policies around AI use since ChatGPT stormed onto the scene in 2022. 

According to research from Turnitin, 42 per cent of students worry about being falsely accused of using AI in their assignments, and almost half do not feel confident they could prove their innocence if they were.

“When generative AI first came out, our main concern was, ‘would it take my students’ tests and do their homework?’, rather than how it could make a student better and help them with their research,” he said. A Tyton survey conducted before ChatGPT’s launch in 2022 found AI-based academic misconduct to be 10th on the list of faculty concerns. More recent surveys suggest that it is now the primary concern for educators. There is also a gap between what learners and educators consider appropriate use of AI tools.

According to the Tyton survey, educators argue that brainstorming is the most constructive use of AI, while almost two-thirds of students think that writing some or all of their assignments is an acceptable way of using it. One of the challenges of establishing sensible use policies on AI for students is that 35 per cent of educators are yet to use a large language model such as ChatGPT. “It’s impossible to build a policy around something you don’t know or understand,” said Yaverski. 

“We strongly believe that AI will make the world better and make us more productive,” Yaverski said. “However, learning to write on your own and developing critical thinking still remain crucial to getting jobs.” He added that educators need to establish clear policies that don’t confuse students. Educators could use detection tools offered by edtech providers such as Turnitin as a way to open a discussion rather than a policing tool. 

Detecting AI use provides an opportunity to start a conversation with students on how the technology can be used constructively. Policies can differ across courses and departments but they should not contradict each other, he advised. “Where we want to move to is proof of process. So we’re not just saying how much AI is present in a paper but have students and educators look at how it was created in a way that can help students write their papers better,” Yaverski explained. 

The speaker:

  • Aaron Yaverski, regional vice-president of EMEA, Turnitin

Find out more about Turnitin.
