RIP Version History Tracking?
How AI immunity is evolving...

[image created with Dall-E 3 via ChatGPT Plus]
Welcome to AutomatED: the newsletter on how to teach better with tech.
In each edition, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
Last Week: I discussed some interesting findings from the Digital Education Council’s just-released Global AI Faculty Survey, from regional variations to faculty’s views on which uses of AI are most relevant to their work. Click here to read it if you missed it.
Today, I discuss two ways in which AI tools are getting more powerful, as well as what they mean for educators who need to assess students. There are still many options available to instructors but using version history tracking soon won’t be one of them…
Last Friday, I hosted January’s AutomatED webinar on “Building Better AI Tutors.” It went great! Thanks again to all who registered.
February’s webinar won’t have any entrance fee — the only such webinar of this semester — and will cover creating course content with AI. All it takes to sign up: a “Yes” response to the poll at the bottom of today’s newsletter.

📬 From Our Partners:
AutomatED + Rundown = Profit
Learn AI in 5 minutes a day
This is the easiest way for a busy person wanting to learn AI in as little time as possible:
Sign up for The Rundown AI newsletter
They send you 5-minute email updates on the latest AI news and how to use it
You learn how to become 2x more productive by leveraging AI
Remember: Advertisements like this one are not visible to ✨Premium subscribers. Sign up today to support my work (and enjoy $500+ worth of features and benefits)!
🤖 The Robots are Gaining Power
Last summer, I updated my Premium ✨Guide for preventing and discouraging student misuse of AI. In it, I outlined six broad strategies for professors in academic settings:
motivate students to use AI ethically and for learning through relationship-building and incentive-aligned course design
require students to complete work in AI-free zones, like in-class assessments or proctored exams
design AI-immune assignments that students complete with access to AI
either increase AI immunity with format requirements (like an essay that is completed with version history tracking turned on in a Google Doc)
or increase AI immunity with content requirements that focus on what AI struggles with (like meeting technical field-specific standards that the professor has determined to be challenging for AI to meet)
pair AI-vulnerable assignments with AI-immune verification tasks that check genuine understanding, like an oral exam completed without AI paired with a take-home essay (that they could have used AI to complete)
combine these approaches across one’s course (note: this is my recommended solution)
do nothing in cases where other pedagogical priorities take precedence or where the instructor simply needs to operate as if the students are working in good faith
Many of these strategies remain live options, but there are two growing challenges…
First, AI’s “intelligence” continues to increase, with a stepwise improvement coming with powerful reasoning models like o1, o1-pro, o3-mini (now available to ChatGPT Plus, Team, and Pro users, with free plan users able to “try OpenAI o3-mini by selecting ‘Reason’ in the message composer or by regenerating a response”), Gemini 2.0 Flash Thinking (available for free in Google’s AI Studio), and DeepSeek’s open-source R1 (also available for free from Microsoft in Azure). These AI tools narrow the range of assignments that AI cannot complete satisfactorily, and their high intelligence is widely available.
Second, AI’s ability to manipulate our environments continues to increase, with OpenAI recently releasing “Operator,” an AI system that can control computers just like a human would, to Pro users in the US. Anthropic previously released Claude’s “computer use” functionality in beta form (video demo here). Both can navigate websites, type in text fields, click buttons, and interact with computer interfaces in ways that are nearly indistinguishable from human behavior. (A deep dive on how Operator works is here.)
To illustrate the challenge to some AI immunity strategies, here’s a video I made of Operator writing a paper with the resultant version history (note: the video is sped up 3x to save you some time):
The resulting version history looks fairly authentic, with little effort required on my part. Errors and missteps abound, but this is still a sign of things to come.
(LATE EDIT: And then, yesterday, OpenAI also released its own “deep research” tool to Pro users, which is an “agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you,” just like Google’s Deep Research but focused on tougher tasks. More on it next week...)
So, what do these developments mean for AI immunity, for those of us looking for ways to get students to complete at least some assignments or tasks without problematic reliance on AI?
Below, I explain where things stand.
(Of course, this is not to assume that there are no cases where AI should be used, but it is to recognize that there are reasonable pedagogical views where AI use needs to be curtailed, including for the development of fundamental skills like critical thinking or judgment.)
✨ What Instructors Can Do
One of my recommendations in the aforementioned Guide was to use Google Docs version history or similar tracking tools in Microsoft Word to detect AI misuse. The idea was simple — AI-generated content would typically appear in large blocks of pasted text, while genuine student work would show a more natural writing process with gradual development, revisions, and refinements.
Version history could reveal these telltale patterns, helping professors identify potential AI misuse and enabling them to create rubrics that push students to use the "correct" AI-independent process. (Sure, a student could type in something they produced separately with AI, pretending to write it themselves, but this would be annoying enough that many would be incentivized not to do it.)
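For the curious, here is a minimal sketch (in Python) of the kind of check this strategy relied on. It assumes you have already exported each revision's full text from a document's version history by some means; that export step is not shown, and the `Revision` class and the 800-character threshold are illustrative assumptions, not part of any official tool. It simply flags revisions where a large block of text appeared in a single step.

```python
# Hypothetical sketch: flagging suspiciously large single-revision insertions
# in an exported version history. Assumes you have already exported each
# revision's plain text into a list ordered oldest to newest.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Revision:
    timestamp: datetime
    text: str  # full document text at this revision

def flag_large_insertions(revisions: list[Revision], threshold_chars: int = 800):
    """Return (timestamp, chars_added) for revisions where the document grew
    by more than threshold_chars in one step, a rough proxy for pasted text."""
    flags = []
    for prev, curr in zip(revisions, revisions[1:]):
        added = len(curr.text) - len(prev.text)
        if added > threshold_chars:
            flags.append((curr.timestamp, added))
    return flags

# Example usage with two toy revisions:
history = [
    Revision(datetime(2025, 1, 10, 9, 0), "Outline: thesis, three points."),
    Revision(datetime(2025, 1, 10, 9, 2), "Outline: thesis, three points." + "x" * 1200),
]
for when, added in flag_large_insertions(history):
    print(f"{when}: {added} characters added in one revision")
```

As the next paragraphs explain, this is precisely the signal that agentic tools can now erase by typing content in gradually.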
This was part of a more general shift toward what some call "process verification" rather than "output verification." Instead of trying to detect AI use through the final product (which AI can increasingly simulate), we should focus on verifying the student's engagement with the learning process itself.
But tools like Operator fundamentally change this dynamic. They can simulate the natural writing process with remarkable fidelity by typing gradually, making occasional typos and corrections, taking breaks between sections, and creating version histories that look entirely organic. The "fingerprint" of AI use that we could once rely on is becoming increasingly difficult to detect.
What about the more powerful reasoning models, from o1 to R1? In short, as noted above, they reduce the domain of AI-immune content — content that AI struggles to produce. While there are some strategies one might try to carve out a space where these models still fail, other moves are surer bets.
Let me explain.
Note: The rest of this section is visible to only ✨Premium subscribers. Thanks for your support!
📢 Quick Hits:
AI News and Links
1. ICYMI (“in case you missed it”): A new study in BJET by Yizhou Fan et al. reveals concerning findings about AI use in education: while students using ChatGPT showed better immediate performance on writing tasks, they exhibited "metacognitive laziness" — reduced self-regulated learning and shallower engagement with the material. The researchers found no difference in motivation between students using ChatGPT versus other tools, but those using AI were more likely to become dependent on it without developing transferable knowledge or skills.
2. The U.S. Copyright Office released Part 2 of its AI report on AI and copyright, concluding that:
Copyright requires human authorship and cannot protect purely AI-generated content
AI used as an assistive tool does not affect copyrightability
With current technology, prompts alone don't provide sufficient creative control for copyright protection
Copyright can cover human-authored elements in AI outputs and creative modifications/arrangements of AI-generated content
The Office found no need for new legislation or sui generis rights for AI-generated works, noting that providing such protection could actually discourage human creativity by flooding markets with AI content. The report represents the first comprehensive U.S. government guidance on AI and copyright.
3. The Allen Institute for AI announced Tülu 3 405B, an open-source AI model claiming to outperform DeepSeek V3 and GPT-4o on several benchmarks (note: these aren’t the reasoning models mentioned elsewhere in this piece).
4. A Times report reveals OpenAI is investigating whether Chinese startup DeepSeek violated its terms of service by using "distillation" — training new AI systems using data generated by OpenAI's models. The allegations come after DeepSeek's recent breakthroughs challenged assumptions about required computing resources. OpenAI's spokesperson said they're working with the U.S. government on protecting advanced models, while noting Chinese groups are actively working to replicate U.S. AI capabilities.
5. US President Trump announced a $500 billion "Stargate Project" to boost U.S. AI infrastructure, with SoftBank, OpenAI, and Oracle as key partners. Initial funding of $100 billion will expand to $500 billion over four years. As part of the deal, Microsoft's exclusive cloud arrangement with OpenAI will become a "right of first refusal" agreement. Trump also revoked Biden's 2023 AI risk reduction executive order. Fortune reports that Wedbush analysts suggest this could trigger up to $1 trillion in additional U.S. AI investments.
As I improve ✨Premium subscribers’ benefits, it can be hard to keep track, especially if you’re new to AutomatED. Here’s a summary…
Core Benefits
$0 monthly webinars on timely AI topics (last one: this past Friday, on “Building Better AI Tutors”; next one: on generating course content with AI, at the end of February), while everyone else pays $25
Access to all our practical Guides, including topics like AI-aware assessment design and student AI policies, and Tutorials on how to use AI tools, from using ChatGPT for analyzing teaching data to using LearnLM for tutoring
Discounts on our focused courses (like our new custom GPT course)
Annual 1-hour consultation to discuss your specific challenges
Premium-only sections in weekly newsletters
You can see all the benefits, including all the Guides and Tutorials, here.
Nearly 100 professors and learning specialists already use these resources to save time and enhance their teaching. Want to join them?
📝 Register for February’s Webinar
Want to learn more about using AI to generate course content?
If so, just answer “Yes” in the below poll and you are thereby registered for my February Zoom webinar. I think it will be on February 28th from noon to 1:30pm Eastern, although this isn’t set in stone…
Want to register for Feb's webinar on using AI to create course content?
If you select “Yes,” that’s enough to register. You’ll get a follow-up email in a few weeks with more information.
What'd you think of today's newsletter?

Graham

Let's transform learning together. If you would like to consult with me or have me present to your team, discussing options is the first step. Feel free to connect on LinkedIn, too!