The Depth of the AI Plagiarism Problem
Professors' assignment design should be guided by a frank and realistic look at how easy effective AI plagiarism has become.
[image created with Midjourney]
Welcome to AutomatED: the newsletter on how to teach better with tech.
Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
Let’s dive into why I think many professors need to take AI plagiarism more seriously.
Many professors are in denial about the depth of the AI plagiarism problem.
When I talk with other professors about student plagiarism of written work in the era of AI, I regularly hear expressions of confidence about the limitations of AI, like the following:
A. “Students are not aware of how they could use AI to plagiarize.”
B. “AI-written text is clearly distinguishable from student writing.”
C. “There are free online tools that can reliably detect AI-written text.”
D. “If I suspect a student of using AI, I can determine whether they are by comparing the suspected bit of writing with their other submitted work.”
E. “My assignments are designed in ways that make it impossible to use AI to complete them.”
Today, I will argue that each of these claims is false or, at least, that many professors should operate as if it were.
Students are already plagiarizing with AI. I suspect many of them are not getting caught. This has become much clearer to me this semester. Just this past week, I determined that 8 of my students plagiarized their mid-term papers, including 4 who used AI.
📖 🔎 The Paradigms Before AI
As a bit of background, here are the main types of plagiarizers who can be caught without technological assistance:
The Red Handed Thief: The student who is caught in the act of copying from a source or relying on someone else to write their work for them. “Uhhh, sorry Professor! I didn’t mean to! I promise!”
Detection Method: The professor observes the student plagiarizing (or gets testimony about plagiarism from an observer), and the student admits it.
The Armchair Magician: The student whose writing displays skills, expresses facts or references, or more generally takes forms that are not plausibly their own, all without citation. “Whether ’tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles and, by opposing, end them.”
Detection Method: The professor compares the student’s work to a general standard – either a citation standard or a standard of expectation for students of the relevant type – and determines that the student has plagiarized on the basis of a mismatch.
The Chameleon: The student whose writing style, written content, displayed skills, etc. vary too widely across different parts of one submission or across different submissions. Shakespeare one paragraph, Elmer Fudd the next.
Detection Method: The professor compares the student’s work to itself and determines the student has plagiarized on the basis of a mismatch.
The Amnesiac: The student who writes something suspicious or dubious and who cannot defend themselves when pressed (or otherwise display inconsistent ability). “I don’t know how, Professor, but I was an A student when I wrote that paper and today I can’t dot my i’s.”
Detection Method: The professor notices potential issues and then queries the student in ways that reveal plagiarism.
The Sad and Confused: The student who submits written work that does not match the assignment instructions and/or course content more generally. “I thought we could write on anything [remotely] related to the course that we wanted to, Professor. Honest!”
Detection Method: The professor realizes immediately that the submission isn’t appropriate for the assignment, and then has reason to suspect that it has been plagiarized, especially if the assignment is novel but the domain of the course is not.
With the help of technology, plagiarizers of two further types can be detected:
The SparkNoter: The student who copies text from one or multiple searchable websites (e.g., SparkNotes, CourseHero, Wikipedia) to write part or all of their submission. “I was just using the encyclopedia entry as a mere guide, Professor, even though 75% of my paper mirrors it in content, structure, examples, and writing style.”
Detection Method: The professor uses a search engine and quotation marks to search for exact matches from the student’s submission to see if those bits are online somewhere verbatim, or they use services like TurnItIn to do it for them. Once they find a source, professors can then compare it to the student’s writing to see if some of the other parts of the source are similar (though not identical) to other bits of the student’s submission. The professor can rinse and repeat to find other sources, in case the student plagiarized from multiple websites (again, TurnItIn automates some of these processes).
The Strange Coincidence: The student who copies text from a non-searchable website or from another student’s submission (from the same or another class) to write part or all of their submission. “I don’t know what you are talking about, Professor! I worked really hard on that assignment.”
Detection Method: The professor must either rely on having seen the same text before (very unlikely) or rely on a service like TurnItIn. The latter compares student submissions to online sources as well as prior submissions to TurnItIn’s service. Since the submission is plagiarized from a non-searchable website, TurnItIn simply alerts the professor that they need to investigate further due to suspicious parallels between the submission and others from the past.
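The verbatim-matching step behind these tools can be illustrated in a few lines. The sketch below is a minimal, hypothetical illustration of the general technique, not TurnItIn's actual algorithm: it flags word sequences (n-grams) that a submission shares verbatim with a candidate source. The function names and the window size are my own assumptions.

```python
def ngrams(text, n=8):
    """Return the set of n-word sequences in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_passages(submission, source, n=8):
    """Flag n-grams appearing verbatim in both texts; a non-empty
    result warrants a closer manual comparison."""
    return ngrams(submission, n) & ngrams(source, n)

source = ("whether tis nobler in the mind to suffer "
          "the slings and arrows of outrageous fortune")
submission = ("He asks whether tis nobler in the mind to suffer "
              "the slings and arrows, then answers.")
print(len(shared_passages(submission, source, n=6)))  # several matches
```

Note the sketch's limitations, which mirror the professor's predicament: it only catches exact overlap (even a stray comma breaks a match), so it detects the SparkNoter but not a student who paraphrases, and certainly not one who generates novel text with AI.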
One crucial thing to note about these plagiarizers and detection methods is that some of the detection methods provide strong evidence of the plagiarism while others do not.
As noted, some methods require follow-up interventions and some compliance from students to result in strong evidence (including admissions of guilt). The professor is always seeking to shrink this gap between detection and proof. Mere suspicion without sufficient evidence is not enough to allege that a student has plagiarized.
Unfortunately, AI stretches the gap between the detection of plagiarism and the proof of plagiarism significantly.
👻 F. None of the Above
AI tools like ChatGPT are all over the news. However, even if the bulk of university students do not consume traditional news, they do use social media, including TikTok and YouTube, and there are countless viral videos on these sites that explain how to effectively plagiarize with AI.
Thus, A from my list above is false: many students are aware of AI, and its applications to plagiarism are quickly becoming widely known.
To make matters worse, the techniques for using AI to plagiarize that are presented on social media are immune to all of the above detection methods.
In the course of showing why, I will address those professors who endorse versions of B, C, D, and E from my list.
Suppose that a student is working on a take-home essay assignment. Suppose, too, that the essay assignment is novel to their professor’s class, in that it requires them to address a combination of issues, topics, texts, etc. that are not so combined or addressed anywhere in a single accessible source.
If the student seeks to plagiarize, they could use the traditional method of copying from a range of accessible sources while adjusting the copied text to avoid getting caught by their professor using search engines or a service like TurnItIn. This is very time-consuming, risky, or both. So, instead, they could ask a friend to write a unique essay for them or pay someone to do so. The first is fairly unlikely – in general, students’ friends aren’t going to write their essays for them – and the second is expensive. Enter AI.
First, the student inputs into ChatGPT their professor’s prompt, as well as a variety of additional instructions (e.g., summarizing the context from their perspective as a student in this professor’s class doing this assignment). They can repeatedly ask ChatGPT to write their essay for them, making adjustments after each iteration, until they get a result they like. They can even ask ChatGPT to produce text in a style it normally does not – many of the aforementioned videos report that this sort of instruction works.
But the story doesn’t end there. Suppose the student is worried that they still might get caught. They can intersperse their own writing in the AI-generated essay (replacing sentences or clauses throughout), and then run the result through a variety of other AI tools, such as the Quillbot paraphraser. These tools take text given to them and rewrite it in a novel way while retaining much of the same meaning.
Next, the student can edit the resultant text in order to introduce errors in spelling, grammar, and content (AI generally doesn’t make these sorts of errors), as well as to add citations and quotes from the course materials (AI generally is unreliable with citation). Finally, they can run the text through all of the free online tools for checking for AI use to ensure that their essay will not trigger any alarms.
C from the original list is clearly false of this student’s essay; no tool can detect that they have plagiarized. For parallel reasons, B is likely false, too, unless the professor somehow has access to a mental AI detection tool of great reliability. (If you’re out there, please come forward.)
In either case, the gap between detection and proof is problematic. The developers of some of the best AI detection tools admit that they “should not be used as a primary decision-making tool” because of their unreliability. As a result, the student can simply deny that they have plagiarized, and the professor has no recourse.
Since there is nothing preventing our hypothetical student from using these methods for all of their take-home writing assignments, D is at risk, too. Unless the student’s professor has a “safe” set of past assignments that the student could not plagiarize in this way, there is no baseline to which the professor can compare this essay.
All that remains is E, which stems from the thought that professors can reliably create assignments that are AI-immune. Certainly, some assignments are AI-immune (I will be discussing ideas for them over the next few months). However, most are not, including many of those (a) that professors believe are AI-immune and (b) that are pedagogically superior or ideal. There are many cases where professors tell me E, only for me to ask them about their assignments and to find out that they are far from AI-immune.
In short, professors should assume F: none of the above, at least until they have good reason to think that they have grappled with the depth of the AI plagiarism problem. We must adjust.
Next week, we will turn to that topic…
Check out Poe. It’s a playground that includes different large language models like ChatGPT. ChatGPT has started charging for a premium tier and, if you’ve been using it, you know that the free level occasionally has outages (because it is so popular). So far Poe is free and fast. It will also let you play with other high-quality models.
Sentient Syllabus wrote a proof of concept for personalized instruction with AI. Here it is. “Personalized education has huge potential for learning outcomes, and generative AI may make this a reality. Even the current version of ChatGPT gives educators access to individualized contents, without requiring additional software, costly consultants, or the involvement of third parties…
Such personalized assignments contribute to students’ learning, enhance their agency, and provide valuable opportunities for formative feedback.”
This is just the beginning. Just as AI has created significant problems for teachers, it has also created opportunities, as we will be discussing in this newsletter.
Something Fun: GPT-4 In Their Feelings
As people wait for the new and improved model of ChatGPT, Dan Shipper notes that “Everybody always asks: ‘what is GPT-4?’ No one ever asks: ‘how is GPT-4?’” (Click for an image of GPT-4 created by Midjourney.)