How Bad is Academic Dishonesty Today?
How many students cheat, how AI complicates the picture, and what professors can do.
[image created with Midjourney]
Welcome to AutomatED: the newsletter on how to teach better with tech.
Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
This week, we explore the question: how many students cheat? We then look at how AI affects this perennial issue.
AI seems to make academic dishonesty more rampant.
But how bad was the problem to begin with?
Without a sense of the baseline, we can't gauge how much AI promises to change the educational landscape.
On one level it’s hard to tell — obviously cheaters do not want to be detected.
Still, many studies have explored this question extensively. Unfortunately, it looks like most students cheat at least in a minor way. In their book, Cracks in the Ivory Tower, Jason Brennan and Phillip Magness helpfully summarize the data as follows:
[Excerpt from Cracks in the Ivory Tower, page 216]
Around 20 to 40 percent of students are habitual cheaters (cheating 3 times or more). Reporting the evidence from several surveys, James M. Lang notes:
In Bowers’s survey, 19 percent of respondents had engaged in at least three cheating incidents; McCabe and Trevino’s 1995 survey had that number at 38 percent; a different set of researchers reported in another survey from around the same time that 21 percent were three-time offenders.
In his book Cheating Lessons: Learning from Academic Dishonesty, Lang writes that he agrees with Tricia Bertram Gallant:
Whether the average twenty-first-century student cheats more or is less honorable than the average twentieth-century student cannot be said with certainty.
Cheating is a serious problem, but alarmist and declinist stories are likely misplaced.
(An aside: after noting the data on cheating, Brennan and Magness consider the question of why students cheat — and rely significantly on the work of Dan Ariely, some of whose “landmark studies” have been met with calls for retraction due to worries that his lab faked data.)
⚖️ 🤖 Honor Codes vs AI Nodes
With this baseline in mind, how should we picture AI in the classroom?
One interpretation is positive, at least with respect to the impact of AI: AI won’t make cheating that much worse because the problem is already pretty bad!
Another interpretation is less positive. Reported cheating during remote schooling in the pandemic appeared to be much higher because the number of in-person assignments decreased significantly. In “A Critical Analysis of Students’ Cheating in Online Assessment in Higher Education: PostCOVID-19 Issues and Challenges Related to Conversational Artificial Intelligence”, Bubaš et al. note that teachers and students reported higher levels of cheating on online assignments in a number of different surveys.
This suggests that students are sensitive to the falling costs of cheating: if cheating becomes easier, more students will do it. No surprise to students of human nature.
Because AI tools make it easier to cheat on many assignment types, we should expect more academic dishonesty. I haven't seen many academic surveys of this, but anecdotally that is what we've seen and heard, hence our AI-immunity assignment challenge. And as LLMs improve, the cost of cheating may fall further still.
Returning to the positive side, we have some evidence about what works to dissuade many students from cheating, even if it doesn't eradicate cheating entirely.
The work of Donald McCabe and collaborators argues that honor codes have a non-trivial impact in deterring academic dishonesty. That work is not without controversy: James M. Lang argues that it is the discussion honor codes engender that is effective, if they are impactful at all. McCabe appears to agree when he writes:
The power of an honor code today appears to be directly related to how effectively students are oriented into this tradition and how much effort and resources a campus is willing to expend in working with faculty and students to institutionalize a code within its culture and keep it alive over time.
Crucially, different assignment types raise the cost of cheating. In-person assessments, oral exams, and the like are far more resistant to AI.
An alternative approach: we can accept the dominance of LLMs in some kinds of assignments, with the understanding that we're grading the joint output of a student and an LLM. (Their contributions could be produced in sequence — as in the case of what we have called “pairing” — or simultaneously.) Hence, teaching will involve not merely training in the subject matter, but also AI expertise. As many kinds of knowledge work come to mirror this pattern, this may tighten the link between school and career for many students. Professionals will use LLMs, but LLMs will not fully replace them.
Finally, all of this technological transformation is new. I’m optimistic that we will find better solutions than what I’ve listed here. Indeed, we’re working on a new definitive guide on the matter — stay tuned.