✨ Guide: How Professors Can Discourage and Prevent AI Misuse
We have gathered the past two years of our research into a guide for professors for the 2024-2025 school year.

[image created with Dall-E 2 via Bing’s Image Creator]
Welcome to AutomatED: the newsletter on how to teach better with tech.
Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
In this fortnight’s Premium edition, I present a guide for how to discourage and prevent AI misuse, at least as things stand late in the summer of 2024 (when this Guide, originally posted in August 2023, was last updated).

Last year, we tested take-home assignments for our AI-immunity challenge — experimenting with them to see if we could crack them with AI tools alone in an hour or less. I have been testing my own assignments, experimenting with all sorts of AI tools, and reading about others’ methods. We researched AI detectors, and then I followed up on the topic with the news of OpenAI’s supposedly 99.9% accurate detector.
I have been collaborating with educational researchers to learn more about best practices for oral assessments and in-class dialogue, and I reported on my (very positive) experiences with oral exams at the end of the year.
I have been thinking seriously about how to design assignments and assessments in the age of AI, as well as how to teach students to use AI. I have been consulting with professors and learning specialists about how to build custom GPT tutors and how to improve the grading/feedback process with AI.
In this piece, I build on these experiences to present a comprehensive Guide to discouraging and preventing AI misuse by university students. Much of what I write is intended to be general and somewhat timeless, but I do expect some of this Guide’s relevance to diminish as time goes on — this is my perspective on how things stand late in the summer of 2024.
I am often told by professors that they would appreciate a zoomed-out take on this issue, rather than a piecemeal or partial approach. Here it is, with the main options available to professors at the present moment.
Note: This is a Guide focused on AI misuse, but it does not assume that AI use is always misuse (hence my Guides on positive uses of AI). The idea is to provide you with a conceptual framework and practical strategies to deal with AI misuse, regardless of what you think it amounts to. If you think there are cases where students misuse AI by, say, taking shortcuts or failing to meet your learning objectives, think of those cases as you read this Guide.
🖼️ The Big Picture
There are six broad strategies you can take to discourage and prevent AI misuse by students on a given assignment:
1. Motivate students to not misuse AI in completing the assignment.
2. Require students to complete the assignment without access to AI.
Because AI can be accessed easily on any device connected to the internet — and even on devices that are not, via locally run LLMs — this leaves two device-free options:
Develop an in-class handwritten version of the assignment.
Develop an in-class oral version of the assignment.
However, in some contexts, secure online proctoring is available as an alternative, as I will discuss below.
3. Allow students to complete a (more) AI-immune version of the assignment with access to AI.
Developing an assignment to be more AI-immune is a complex and ever-evolving process, but there are two broad categories of options:
Develop an assignment that is AI-immune due to its format.
Develop an assignment that is AI-immune due to its content.
4. Pair the assignment with another assignment that students must complete in an AI-free zone, such that they are incentivized to achieve the learning objectives in both cases.
Conceptualize pairing the way you might conceptualize limiting students’ ability to rely on their peers’ expertise in their dorms and in the dining hall.
Sure, they can ask their clever friend about how to solve a problem or write an essay, but then they need to come to class and perform with those skills and that knowledge internalized (and not merely memorized).
5. Do nothing.
6. Some combination of the above.
Let’s discuss these in sequence…
Option 1 - 🍎 An Ounce of Discouragement
…is worth a pound of prevention.
Or so Professor Paul Blaschko argued in our interview with him. According to Blaschko, one of the central goals of the educator is to convince students of the extrinsic and intrinsic value of what they are learning. A teacher’s job is not just to convey facts, information, or knowledge, but rather to inculcate in their students the motivation to care about their subject matter, to find it interesting, and to value learning it the right way — both for what it is in itself and what it will bring them. The result of a professor successfully completing this motivational task will be students who do not want to misuse AI or, more generally, cheat.
The standard ways you might change the motivation paradigm include creating assignments that relate your course’s content to your students’ lives or building personal relationships with your students so that they care about your courses.
While there are some students, some professors, and some contexts where this strategy may be less feasible than others, it is surely the first line of defense for many professors. Besides, we should be working to motivate our students in these ways more generally, AI misuse aside.
A more radical way to alter the motivation paradigm entirely is “ungrading,” which has slowly gained some popularity — and notoriety — in recent years. In short, ungrading is a broad family of pedagogical practices that deemphasize or remove graded assessment. You can read about it ad nauseam elsewhere (see here, here, here, or here).
However you implement ungrading, the core idea is that it discourages AI misuse by removing students’ incentive to cheat or plagiarize in the first place, as Emily Pitts Donahoe has discussed.
The concern, of course, is that it disincentivizes them in bad ways, too, but that’s a debate beyond the purview of this piece. Nevertheless, it is worth considering as an option.
Another option would be to adopt an AI detector policy — a policy, conveyed to your students, that (among other things) informs them that their submissions for your assignments will be evaluated by AI detectors. Even if you cannot get students to reject AI misuse by encouraging them to value what they learn and gain from being honest and earnest, you can at least use the risk of getting caught to disincentivize that misuse.
However, as I discussed last year and again this summer, there are issues with this latter approach.
First, AI detectors are a “black box” of sorts, just like AI tools themselves, in that they cannot provide the same sort of transparent evidence for their claims that prior plagiarism-detection tools could. This may change with some new methods under development (as I discussed in August), but we aren’t there yet.
Second, AI detectors vary in their reliability, both at a given time and in general, and it is not clear which ones are reliable or whom their unreliability might affect most (e.g., students who do not speak English as a first language).
Third, you need to remain cognizant of and compliant with institutional policies governing AI detection at your university. You may not have the freedom to develop your own policy.
Option 2 - 📝 An AI-Free Zone
Moving to the second option on the list above, I will now discuss ways to require students to complete assignments without access to AI.
There are three broad methods here:
Online Proctoring (Software and Human)
There are some options for online assessment that attempt to limit students’ access to AI tools — and/or the internet — during remotely administered online assessments. Some of them involve software/AI solutions to prevent AI misuse, while others involve proctors watching via Zoom or reviewing recordings. One benefit of the COVID-19 pandemic is that it forced many universities to more seriously experiment with these solutions, and there are now many insightful research studies analyzing the results.
For instance, in a large-scale study of Tel Aviv University’s efforts during the pandemic, authors Patael et al. survey the literature surrounding online assessment and provide a thorough discussion of the opportunities and challenges (see section 2 for a summary; for other useful discussions, see here and here). Even after noting the significant concerns about online proctoring in general, they describe Tel Aviv University’s efforts and conclude as follows:
This study described the procedures for online-proctored examination implemented by TAU University in the Fall 2020/21, and reported its assessment by TAU students and faculty. The described procedures were born out of necessity triggered by the COVID-induced lockdown. Nevertheless, it appears that the protocol employed for conducting proctored assessments remotely can serve as a fertile foundation to provide a direction for a viable alternative to traditional university models of fully on-campus examinations, as the institutions for higher education move towards normalizing online assessment in the post-pandemic environment. We demonstrate that even in a challenging learning context—such as the one triggered by the COVID-19 pandemic—online videoconferencing technology can be used effectively as a quality assurance mechanism to carry out proctored examinations remotely. […]
Moreover, this research demonstrated potential advantages of a well-designed use of the Moodle and Zoom, for conducting proctored examinations remotely, allowing us to provide some key recommendations for institutions, examiners and students. […] The good news is that the institutions for higher education should not throw the baby out with the bath water and revert to the fully f2f mode of proctored assessments once the pandemic-related restrictions are lifted and the students are allowed back on campus. In fact, paradoxically, the COVID-19 pandemic might have acted as the catalyst to create a disruption from the prevailing model.
If you are interested in this option, its viability in your case depends on whether and to what degree your institution has the technology and policies in place to accommodate your plans, as well as the fit between your assignments and the technologies required.
In-Class Oral Assignments
A second option is to move assignments into the classroom where students cannot access their devices (at least during certain times).
If students lack access to their devices, then — setting aside performative assignments like those involved in athletics and dance — there are two ways they can complete assignments: orally and via handwriting.
We have already discussed at some length the pros and cons of oral exams in a piece from last year (see other discussions here, here, and here), and I followed up with my positive experiences this past winter. Last year, I also co-authored a piece in Inside Higher Ed with experts from the Constructive Dialogue Institute on best practices and ideas for in-class dialogue-based assessments.
But one question I received after we posted the piece on the pros and cons of oral exams was this: supposing I wanted an oral assignment to play a central role in my class, what would you recommend?
My answer revolved around my own experiences with oral assessments, which are always paired with written assignments, so I will wait to discuss them below under Option 4.
In-Class Handwritten Assignments
This option is self-explanatory — and others have argued for its viability in other contexts (see here) — but there are a few considerations worthy of your attention.
First, remember that many of today’s students write primarily with their devices (whether by typing or dictating), so completing assignments by hand can be a relatively foreign method of composition. Second, there are issues with handwriting variability, including difficulties with objectively assessing the content it expresses. For instance, there is some evidence that graders assess the same sentence, paragraph, or paper differently based on the legibility of the handwriting used to express it. Joseph Klein and David Taub report that “there are significant differences in the manner of evaluation of compositions written with varying degrees of legibility and with different writing instruments” (see here).
The only systematic and recent literature review of handwritten examinations I have found is that of Cecilia Ka Yuk Chan, who concludes that there are four “unambiguous advantages” of typed exams: “alignment with industry practice,” “easing examination anxiety,” “legibility,” and “efficiency in grading and feedback.” Likewise, there are five “unambiguous disadvantages” of typed exams: “[un]fairness,” “unsuitable for all disciplines and examination types,” “accessibility,” “[learning] environment disturbance,” and issues related to “technology adoption and training.” Chan also notes many “ambivalent factors,” as well as issues with stakeholders’ perceptions, and closes with this mixed and cautious assessment:
In conclusion, the shift from handwritten to typed examinations carries both opportunities and challenges for educational institutions. By thoroughly assessing the advantages and disadvantages of each format, carefully considering the needs of various stakeholders, and adopting a proactive and well-planned approach to implementation, higher education institutions can maximize the benefits of typed examinations while minimizing their potential drawbacks. This, in turn, can contribute to a more efficient, fair and inclusive assessment environment for all students and faculty.
If you are interested in this option, Chan’s paper is worth a look: consider each of the factors Chan discusses (and the literature cited for each) in relation to the assignment you have in mind.

Option 3 - 🛡️ AI-Immunizing Assignments
If you cannot sufficiently motivate students to not misuse AI in general and you cannot move a given assignment to an AI-free zone, then one option is to increase the AI-immunity of that assignment. (Another option is to leave it susceptible to AI misuse — were it standalone — but pair it with a second AI-immune assignment, an option which we will consider next.)
There are two broad strategies here:
Develop an assignment that is AI-immune due to its format.
You could have students handwrite their submission. You could have them write a paper with a word processor, print it, and mark it up. You could have them record their submission orally (though AI voice generators are available and will soon be ubiquitous). You could have them make a video (though AI video generators are becoming more and more powerful). You could have them screenshot their sources (note: LLMs with search capabilities can locate source texts with more and more reliability — with Perplexity leading the pack and SearchGPT on its heels — but most cannot provide screenshots of them without a lot of extra work).
All of these methods attempt to leave unchanged the cost of an honest student expressing their thoughts, ideas, arguments, analyses, or calculations while significantly increasing the cost for a student who wants to misuse AI to do so. Whether a given format affects the cost in this way for a given assignment depends on many context-specific details.
One popular way to carry out this strategy would be to require students to turn in submissions that allow the grader to review the version history of their submission in Microsoft Word or Google Docs. What this version history will reveal is a pattern of creation. If, for instance, a student pastes in a swathe of an essay from another source, then it is more likely that this bit was produced by an AI tool (or another source). In short, an honest and earnest submission is created by a different sort of process than a dishonest one — and your job as a professor would be to differentiate the two via publicly stated rules that students are aware of in advance. (For discussion of using Google Docs’ version history in this way, see here.)
There are some issues with this way of carrying out this strategy. First, students can forget to turn on version history, misunderstand the functionality, have technological issues, etc. Second, all of these format-based strategies can be beaten by a student transcribing or reading AI tools’ outputs. Whether version history is turned on or not, I can have a separate tab open on my browser with ChatGPT’s outputs, and I can type them at an irregular pace into my Google Doc, deleting periodically, etc., thereby simulating an honest process. Third, there are serious issues with the standards of detection — namely, lots of grey area cases where it is not clear whether a student was dishonest or not. Fourth, it takes a lot of time to review the tangled web of tracked changes or the version history of a Google Doc.
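On the fourth issue (review time), some of the triage can be automated. What follows is a minimal sketch, assuming submissions are collected as Google Docs and you have Google Drive API credentials with read access; DOC_ID and token.json are hypothetical placeholders. Note that the Drive API exposes only coarse revision snapshots, not the fine-grained edit history visible in the Docs UI, so treat its output as a pointer toward closer manual review rather than as evidence on its own.

```python
# Minimal sketch: list revision timestamps for a submitted Google Doc
# via the Drive API (v3). Assumes google-api-python-client is installed
# and you have an OAuth token with a Drive read scope. DOC_ID and
# token.json are hypothetical placeholders.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

DOC_ID = "your-google-doc-file-id"  # hypothetical placeholder

creds = Credentials.from_authorized_user_file("token.json")
drive = build("drive", "v3", credentials=creds)

# Page through all revisions, keeping only their IDs and timestamps.
revisions, page_token = [], None
while True:
    resp = drive.revisions().list(
        fileId=DOC_ID,
        fields="nextPageToken, revisions(id, modifiedTime)",
        pageToken=page_token,
    ).execute()
    revisions.extend(resp.get("revisions", []))
    page_token = resp.get("nextPageToken")
    if not page_token:
        break

# Print the revision timeline. A document that springs into existence
# in one or two large revisions is worth a closer manual look, though
# that pattern is not proof of misuse on its own.
for rev in revisions:
    print(rev["id"], rev["modifiedTime"])
```

Even with a script like this, the grey-area problem from the third issue remains; automation only helps you decide where to spend your limited review time.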
With that said, again, the core idea is to have students submit their response to the assignment in a format that disincentivizes them from misusing AI, not one that makes misuse impossible.
AI-immunity is not binary; it is a spectrum upon which assignments can be placed, from less to more. So, even if this strategy is vulnerable to some methods of circumvention, most of them involve the student working harder to be dishonest. Thus, they increase the AI-immunity of the assignment.
Develop an assignment that is AI-immune due to its content.
There are many takeaways from our AI-immunity challenge, which focuses primarily on the features of an assignment’s content that make it more or less AI-immune. (By ‘content’, I mean the subject matter that the assignment concerns, rather than the manner in which it is presented.) In the case of the Clinical Research exam we tested, several aspects of its content made it more AI-immune:

Subscribe to Premium to read the rest.