
Is it time to turn off Turnitin?

In this extract from their new book, ‘Teaching with AI: A Practical Guide to a New Era of Human Learning’, José Antonio Bowen and C. Edward Watson discuss the reliability of AI detection tools and how to combat cheating without them


American Association of Colleges and Universities
29 Apr 2024
Image: a robot drops papers as it is caught in a flashlight beam (credit: iStock/Moor Studio).


There has always been cheating. Faculty have often not wanted to know about it. Few of us want to focus on policing, and we (yes, faculty) were the oddball students who were genuinely motivated by the material. The internet made cheating easier, as did portable and wearable internet devices. Most college students have long admitted to cheating in some form, and the top reason for cheating is “there was an opportunity to do so”. It’s no surprise that cheating with AI has become a top concern for faculty.

The University of Pennsylvania’s annual disciplinary report found a seven-fold (!) increase in cases of “unfair advantage over fellow students”, which included “using ChatGPT or Chegg”. But Quizlet reported that 73 per cent of students surveyed (1,000 students aged 14 to 22, in June 2023) said that AI helped them “better understand material”. Watch almost any Grammarly ad (ubiquitous on TikTok) and ask, first, whether you think clicking on “get citation” or “paraphrase” is cheating. Second, do you think students might be confused?

Most students seem to recognise the ethical considerations and risks of using AI, but two recent peer-reviewed studies found that most students said they would continue to use AI anyway. In the spring, 51 per cent of students said that they would continue to use AI even if their instructors or institutions prohibited it.

The motive is clear: 12 per cent of student ChatGPT users say it increased their GPA, reporting an average GPA of 2.9 in autumn 2022 (ChatGPT was released on 30 November 2022) and 3.5 in spring 2023.

Detecting the detectors

There is a cyber-race to create AI detectors, but determining their accuracy is complex. It is hard to untangle false positive rates, conflicts of interest (inherent in vendor-sponsored studies), how students attempt to avoid detection, digital inequity concerns, and whether this is how faculty ultimately want to dedicate their energy. As both tools and usage change (rapidly), tests need to be redone, and AI detectors are already having to revise claims. Turnitin initially claimed a 1 per cent false-positive rate but revised that to 4 per cent later in 2023. That was enough for many institutions, including Vanderbilt, Michigan State and others, to turn off Turnitin’s AI detection software, but not everyone followed their lead.

Detectors vary considerably in their accuracy and rate of false positives. One study looked at 14 different detectors and found that five of the 14 were only 50 per cent accurate or worse, but four of them (CheckforAI, Winston AI, GPT-2 Output and Turnitin) missed only one of the 18 AI-written samples. Detectors are not all equal, but the best are better than faculty at identifying AI writing.

One response to all this is to try to stop the use of AI on campus. Some campuses have blocked all AI websites (eliminating their legitimate use for faculty, recruitment and other campus services, too), and not all students object. Campus bans on the internet, however, were short-lived.

Even with good AI detection software, you still need to consider the programs’ effectiveness and the implications of false positives. Even if the rate of false positives is low, it adds up: a 4 per cent false-positive rate applied to thousands of submissions a term means hundreds of students wrongly flagged. Are you comfortable with that level of collateral damage, and with which student subpopulations it is most likely to hit? The time and effort that you and your institution will devote to finding AI is another consideration; most of us did not grow up wanting to be enforcement officers.

If you or your institution still want to proceed with AI detection tools, remember that they do not accuse students – faculty do. Detectors only provide a probability score. Faculty will need training and everyone will need to consider what to do with the results.

Low-effort cheating interventions

There are, however, some easy things that we can do to reduce cheating. A detailed paper with rich data from the department of computer science at the University of California, Riverside found that these six low-effort interventions made a difference:

  1. Discuss academic integrity

    Or better, just discuss integrity and why it might matter both in school and the workplace. A discussion about what AI can and can’t do and what policies might be fair for all students is both motivating and effective.

  2. Give an integrity quiz

    Requiring a 100 per cent score to continue in the course will influence students to read and consider the policies, definitions and consequences. Again, this is best done just before the first assignment and not on the first day. Being explicit about the names of study aid providers (like Grammarly and Chegg) and AI websites, and about which activities are allowed, is important.

  3. Allow students to withdraw submissions

    We know that workload and deadlines increase the temptation of shortcuts. A “regret clause” allows students who might be feeling pressure just before a deadline to reconsider (in the next 24 hours, for example) and take a zero on the assignment they have already submitted. This study found that 10 per cent of students took this option at least once during the semester. The University of Maryland also reported a record number of students reporting their own misconduct this year.

  4. Remind students about academic integrity

    As the semester wears on, students forget and priorities change. Remind students of both the benefits of doing the work and the penalties for cheating.

  5. Demonstrate detection tools

    Students suspect you might be bluffing or that the tools might not actually work. A demonstration tips the balance a little towards the threat of being caught.

  6. Normalise help

    Everyone gets stuck, but not everyone knows that everyone else gets stuck, too. We know that a culture of acknowledging difficulty combined with support is the secret sauce of student success, but it needs to be reinforced routinely. Point students to resources and acknowledge challenges constantly. 

José Antonio Bowen is the author of Teaching Naked: How Moving Technology Out of Your College Classroom Will Improve Student Learning and Teaching Change: How to Develop Independent Thinkers Using Relationships, Resilience, and Reflection, and the former president of Goucher College. 

C. Edward Watson is the associate vice-president for curricular and pedagogical innovation at the American Association of Colleges and Universities.  

Extracted from Teaching with AI: A Practical Guide to a New Era of Human Learning by José Antonio Bowen and C. Edward Watson. Published with permission of Johns Hopkins University Press. 


 
