The Depth of the AI Plagiarism Problem
Professors' assignment design should be guided by a frank, realistic look at how easy it is for students to plagiarize effectively with AI.
[image created with Midjourney]
Welcome to AutomatED: the newsletter on how to teach better with tech.
Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
Let’s dive into why I think many professors need to take AI plagiarism more seriously.
Many professors are in denial about the depth of the AI plagiarism problem.
When I talk with other professors about student plagiarism of written work in the era of AI, I regularly hear expressions of confidence about the limitations of AI, like the following:
A. “Students are not aware of how they could use AI to plagiarize.”
B. “AI-written text is clearly distinguishable from student writing.”
C. “There are free online tools that can reliably detect AI-written text.”
D. “If I suspect a student of using AI, I can determine whether they are by comparing the suspected bit of writing with their other submitted work.”
E. “My assignments are designed in ways that make it impossible to use AI to complete them.”
Today, I will argue that each of these claims is false, or at least that many professors should operate as if it were.
Students are already plagiarizing with AI, and I suspect many of them are not getting caught. This has become much clearer to me this semester: just this past week, I determined that 8 of my students plagiarized their midterm papers, including 4 who used AI.