Releasing the AutomatED Feedback Accelerator
Plus, free Premium, and news from Claude and Meta.
[image created with Dall-E 3 via ChatGPT Plus]
Welcome to AutomatED: the newsletter on how to teach better with tech.
Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
In this week’s piece, I explain a shortcoming in last week’s custom GPT feedback strategy, release a new AutomatED GPT (the “Feedback Accelerator”) that addresses this issue and that Premium subscribers can use, and call for more submissions to our AI Assignment Challenge (note: submitters get free Premium).
✨ A New AutomatED GPT: the Feedback Accelerator
In last week's piece, I provided a comprehensive how-to on using ChatGPT to grade faster via custom GPTs. I encourage you to read that piece if you haven’t already, as it is very detailed, with lots of lessons for custom GPT creation more generally.
The specific custom GPT I outlined there (which can be mirrored in prompts for users who don’t want to purchase ChatGPT Plus) has three virtues when it comes to improving your feedback workflow:
It helps you grade 2-3x faster.
It enables you to give students substantial customized feedback.
It does not have access to any of your students’ data.
However, since last week, I have continued to use my custom GPTs — just like the one I outlined — to give more feedback, and I have identified an area where the instructions need improvement.
The issue arises from how the custom GPT handles specific feedback. While this custom GPT is designed to polish the professor’s rough notes on the strengths and weaknesses of students’ work into cohesive comments, I noticed that it occasionally altered my specific feedback in ways that didn't respect my original meaning. Such rewording is fine with generic feedback (“too many grammatical errors”) but not with feedback that I take the time to customize heavily to the student’s submission and that depends on nuances unique to my field.
The part of the custom GPT’s instructions intended to handle this issue appears in the Step 2 and Step 3 instructions from last week’s piece. Here is the relevant part of the Step 2 instructions for my exemplar Reading Response #2 custom GPT:
Take the numbered notes from the user on the strengths of the student's Reading Response #2 and convert them into short unnumbered paragraphs. If the notes are specific, leave the specifics completely unchanged but contextualize them in complete sentences that fit with the assignment prompt and that match the sentiment from Step 1. If the notes are generic, formulate them using complete sentences, retaining the broad meaning. Then, proceed to Step 3.
The solution to this limitation is to give the custom GPT more robust, failsafe instructions on how to handle specific vs. generic feedback: use symbols like quotation marks to demarcate specific feedback, and tell the GPT explicitly how to parse them.
The lesson here is that we need to “speak the language” of the custom GPTs, using syntax that they can parse more reliably. This is the same reason that I use markdown tags (#, ##, ###, …) to flag the hierarchical structure of my instructions.
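For instance, the revised Step 2 instructions might read something like the following (a hypothetical rewording for illustration, not the Feedback Accelerator’s exact text):

Take the numbered notes from the user on the strengths of the student's Reading Response #2 and convert them into short unnumbered paragraphs. Any portion of a note enclosed in quotation marks is specific feedback: reproduce it word for word, dropping only the quotation marks, and contextualize it in complete sentences that fit with the assignment prompt and that match the sentiment from Step 1. Any portion not enclosed in quotation marks is generic feedback: formulate it using complete sentences, retaining the broad meaning. Then, proceed to Step 3.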
I have now implemented these improvements to great effect with my own custom GPTs, and I recommend that you experiment with various methods of doing so with your own.
I have also produced a new AutomatED custom GPT that incorporates these improvements and that you can use, if you have ✨Premium, to speed up your feedback process: the Feedback Accelerator.
For example, if you give it this prompt (note the specific feedback in quotation marks and how the custom GPT retains/incorporates it):
It will produce this output:
Here is the link, visible only to ✨Premium subscribers. (Remember: Premium is only $8/month and comes with many other benefits, in addition to supporting the creation of our free content. You can also get free Premium if you volunteer for our challenge, noted below.)
I am working on two other custom GPTs that I plan to release this summer. And, as always, our Course Design Wizard is publicly available to all, in case you would like help designing assignments, rubrics, and syllabus AI policies that are sensitive to the opportunities and challenges of the AI era.
📢 Quick Hits: News Tidbits for Higher Educators
Anthropic launched two significant updates to Claude aimed at enhancing collaboration and mobility for users: a Team plan and an iOS app. The Team plan provides access to all Claude 3 models (including Opus), tools for user and data management, and increased usage caps. The free iOS app extends the reach of Claude’s capabilities to mobile devices, allowing for seamless synchronization with web chats and easier integration of Claude’s vision capabilities with your phone’s photos.
Why it matters: Claude’s 200K-token context window significantly surpasses that of many AI models, like GPT-4 Turbo’s 128K limit (only Gemini 1.5 Pro, with 1M tokens, is ahead), facilitating deeper and more coherent analysis for complex tasks such as research and grant writing. Many users also report that Claude 3 Opus is the best LLM for high-quality writing. So, the Team plan may make sense for research collaborations, teams of graduate teaching assistants, or administrative units. Upcoming features, including “reliable source citations for AI-generated claims,” integration with data repositories (codebases and CRMs), and collaboration features for “iterating with colleagues on AI-generated documents or projects,” may make Claude Team pull ahead of ChatGPT Team, especially if Anthropic releases an analog to custom GPTs.
Meta has introduced multimodal features, via “Meta AI with Vision,” in all Ray-Ban Meta smart glasses. These glasses, already equipped with a camera, microphone, and speakers, can interpret visual and audio inputs to provide real-time information and responses to the wearer (like answers to “What am I looking at?”). And they are getting decent reviews, at least relative to the less favorable reception of AI gadgets like the Rabbit R1 and the Humane AI Pin.
Why it matters: It is still early days, but smart glasses like these could have significant effects inside and outside of the classroom. Students could use them during lectures, providing contextual information without disrupting the flow of the lesson, or on field trips, allowing for instant visual identification of objects or historical sites. They could also translate foreign languages in real time during study abroad programs. Additionally, the potential for integrating AR into these glasses could allow for immersive, interactive learning experiences where students can visualize complex academic concepts in real time, from molecular structures in chemistry to architectural designs in engineering. However, the technology also poses challenges, including privacy concerns with continuous recording capabilities, the need to ensure reliable information, and the risk that students will use them during assessments.
🏆 The Return of the AI Assignment Challenge
A month ago, in honor of the one-year anniversary of our (in)famous “AI-Immunity Challenge” and our 3000th subscriber, I announced a new contest.
Here’s what I wrote at the time:
Professors: submit your best AI-immune take-home writing assignments for us to try to complete in less than an hour, using the latest generative AI tools! Let’s see what you’ve got!
In recent months, I have been attending conferences and discussing AI with hundreds of professors, and I am finding that many are still skeptical about the ability of AI to complete their assignments. With AI tools gaining power every month, we need to put their (your?) confidence to the test once again!
But why? In short, we professors need to have a good grip on what AI tools can do, or else our pedagogical methods will be problematically insensitive to the opportunities and challenges they present. (See our ✨Premium Guide for how to plan and develop assignments and assessments in the age of AI for more on this dynamic.)
Over the past month, I have gotten several submissions and several judge volunteers (see below), but we need more! Please, after you finish your spring grading, volunteer to provide your assignment or to be a judge.
Here are the rules…
Prizes and Rewards
For Submitters: Any professor who simply submits an eligible assignment and rubric will get one free month of ✨Premium.
If AutomatED Loses: If we get a C or worse on a professor’s assignment (by the terms of their own rubric, as judged by an independent judge with expertise in their field), then the professor will get one free year of ✨Premium and I will post on my LinkedIn an admission of AutomatED’s “loss.”
If AutomatED Wins: If we get an A or a B on a professor’s assignment, then the professor will post on one of their professional social networking accounts an admission of AutomatED’s “win.” (Alternative arrangements are available if this isn’t possible.)
For Professors Who Volunteer to Judge: Any professor who volunteers to judge submissions in their field of expertise enters a raffle for one free year of ✨Premium.
For Professors Who Judge: If such a professor is called to judge a given submission and offers their judgment on it (in alignment with the rubric), then they get three free months of ✨Premium.
Constraints on Submissions
Each assignment must be standalone — not part of a pair or series.
The submissions for the assignment must be capable of being typewritten in their entirety (e.g., no oral exams, handwritten essays, dramatic performances, etc.).
The professor must provide in advance the rubric that an independent judge, expert in the relevant field, will use to grade the submissions. Grades should be labeled A, B, C, D, or F, with clear criteria for each.
Constraints on AutomatED
Every sentence that we submit must be AI-generated — no edits allowed, except for formatting if needed.
We must use only publicly available AI tools.
We cannot spend more than 1 hour on each assignment.
Each of our efforts must be documented and described in a future piece. The takeaways will be provided in a free weekly piece, while a deep dive will be found in a ✨Premium one.
Constraints on Judges
You must provide a grade and a rationale relative to the originally submitted rubric.
You must note strengths and weaknesses of the submission.
Click the following button to email us with an assignment submission or to volunteer to judge:
Here are the three editions of the prior contest, in case you subscribed after we posted them:
April 24 - Guide: Ethically Using AI with Student Data
April 30 - Tutorial: A Beginner's Guide to Local LLMs
May 29 - Tutorial: Automating Student Consent Management
Graham
Expand your pedagogy and teaching toolkit further with ✨Premium, or reach out for a consultation if you have unique needs. Let's transform learning together. Feel free to connect on LinkedIn, too!
What'd you think of today's newsletter?