Conceptualizing Solutions to the AI Plagiarism Problem
Thinking in general terms about the interface between pedagogical appropriateness and AI-immunity.
[image created with Midjourney]
Welcome to AutomatED: the newsletter on how to teach better with tech.
Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
Let’s take a look at a framework for thinking about solving the AI plagiarism problem for written assignments.
Last week, I discussed some of the main ways that I have found that professors are in denial about the depth of the AI plagiarism problem.
Although I suggested that many professors should doubt that their written assignments are AI-immune – at least until they have grappled with the problem – I noted that it is possible to design these assignments so that they cannot be satisfactorily completed by AI. Today, we take a broad view of this and other solutions to the AI plagiarism problem.
To effectively deal with students plagiarizing take-home written assignments with AI, we need a good conceptual schema of the solution space.
A lot of discussion of solutions has been one-off and piecemeal. By generalizing our reflection on these issues, we get a better perspective on which attributes of potential solutions matter.
📈 🗺️ Two Dimensions
When we design an assignment for a course, there are a variety of factors that we should take into account that concern the pedagogical appropriateness of the assignment, including – but not limited to – the following:
the role of the assignment in the module in which it is found, as well as in the course as a whole;
whether the assignment is appropriate given our students’ abilities and knowledge at the point in the module that it is assigned;
the amount of time that our students can be reasonably expected to have available to complete it;
how we expect our students to complete the assignment, including the software they need to complete it and the formats their submissions should take; and
how we plan to grade or assess students’ submissions, as well as how we convey to students our expectations.
In general, assignments are more or less pedagogically appropriate depending on the extent to which they help students learn what they ought to be learning at a given point in time in a given professor’s course.
Crucially, pedagogical appropriateness is not the same as pedagogical effectiveness. There could be an assignment that is highly effective at helping students learn but not appropriate. For instance, students might be required to spend far more time completing it than can reasonably be expected given their other assignments. Such an assignment would be less pedagogically appropriate than others, even if it is more pedagogically effective.
Pedagogical appropriateness is highly complex and context-sensitive. As such, I do not claim to have given an exhaustive characterization of what it amounts to. However, I hope that the concept is sufficiently demarcated by these general terms. My goal is simply to develop a useful conceptual schema for thinking about solving the AI plagiarism problem. I encourage you to combine your own considered views about pedagogical appropriateness with this schema to improve your own outlook on solving the problem.
Now, just as we can think of pedagogical appropriateness as a dimension along which assignments can be ordered or ranked, we can think of another dimension that ranges from being completely immune to AI plagiarism to being capable of being plagiarized easily with AI. I will call this attribute of assignments ‘AI-immunity.’
An assignment is AI-immune to the extent that students cannot easily and reliably use AI to complete it (without being detected as plagiarists) in a way that satisfies the professor's most demanding expectations.
For example, if (i) a student can input an assignment’s instructions unchanged (i.e., verbatim as provided by the professor) into ChatGPT’s prompt field; (ii) the raw resultant output is always going to receive a perfect score from the professor; and (iii) the professor cannot detect whether the student plagiarized, then the assignment is not at all AI-immune. The student need not modify the prompt or tinker with ways of formulating it for ChatGPT, and they need not edit or revise ChatGPT’s output. Within mere seconds of receiving the assignment from the professor, they can get a perfect grade on the assignment and be at no risk of detection – all they need to do is be aware of ChatGPT and be willing to use it.
An assignment is more AI-immune to the degree that the student must work to modify the prompt before inputting it into the AI, to the degree that the AI's output is unreliable in its quality relative to the professor's expectations, and to the degree that the professor has grounds to suspect that the AI's output is plagiarized.
We can create a graph of assignments that has these two dimensions as axes:
The Upper-Right Quadrant
Ideally, all of a professor’s most pedagogically appropriate assignments would be maximally AI-immune. But this is rarely the case. Assignments can be pedagogically appropriate but not at all AI-immune, and assignments can be maximally AI-immune but not at all pedagogically appropriate.
There are two broad strategies for a professor to consistently achieve AI-immunity while retaining pedagogical appropriateness, thereby locating more of their assignments in the upper-right quadrant: