GPTs for In-Class Activities Will Be Game Changers
I explain how to create GPTs for in-class tutoring, and I advocate for strategic thinking about the spring semester.
[image created with DALL-E 3 via ChatGPT Plus]
Welcome to AutomatED: the newsletter on how to teach better with tech.
Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
In this week’s piece, I explain the advantages of creating GPTs for in-class activities, and I urge professors to reflect and plan ahead for the spring while they recover from the past semester.
💡 Idea of the Week:
Reflect and Plan Ahead
As the semester comes to a close, it is tempting to take a well-deserved teaching break after grading final exams and assignments. And you should take a break if you need one. I definitely need one! However, during this break, we need to be strategic. We need to spend some time cataloging the lessons of the prior semester and planning ahead. This has always been a good practice, but it is all the more pressing in the AI era. There are two reasons for this:
1. The proliferation of easy-to-access AI tools requires professors to think ahead about how to prevent what they believe to be AI misuse.
An ounce of prevention is worth a pound of cure. In this case, prevention comes in the form of pedagogically appropriate but AI-sensitive assignment/assessment design, which tends to necessitate professors installing course structures from the start of the semester — in the syllabus — that align their students’ incentives with the AI uses that they deem permissible or desirable. The alternative is for you to try to respond to students’ misuse of AI after it has happened, but this is much more challenging to do, much more adversarial by default, and simply unnecessary.
2. Incorporating AI in a positive or constructive way in one’s course design requires reimagining one’s course, understanding the capabilities of various AI tools, and experimenting.
This takes a time investment proportional to the degree you want to incorporate AI, plus any extra time to get up to speed if you have fallen behind on the AI tools’ functionality and integrations over the past semester.
We covered related topics to some degree in prior pieces, like in our comprehensive Premium guide on preventing and discouraging AI misuse and our earlier pieces on assignment design, but we will be expanding the discussion with a new Premium guide coming out this Wednesday. This comprehensive guide will cover assignment/assessment design in the age of AI, providing a sequence of steps you can take to reliably produce excellent assignments/assessments that are sensitive to your educational context and your views on AI and its misuse — and that leverage AI for positive or constructive purposes, if you want to do so.
🧰 An AI Use-Case for Your Toolbox:
GPTs for In-Class Activities
Last week, I discussed how you can create custom GPTs to help you grade or otherwise evaluate students’ work. This strategy builds on my prior week’s piece on using GPT-4 via ChatGPT chat conversations to ease the cognitive load of grading. In the grading context, the main advantages of GPTs over chat conversations are that you typically get more consistent answers over time (due to less reliance on variable conversational context) and that you can share GPTs with others (like teaching assistants or graders).
The latter advantage is especially relevant for in-class activities. Indeed, I will go so far as to say that the ability to share prebuilt GPTs with your students is going to be a game changer for the classroom.
If you create a GPT in advance to help your students complete an in-class activity, you can rely on it to be useful to the bulk of your students, such that their prompting skill has a much smaller effect on the utility they derive from it. By contrast, if you simply directed your students to leverage ChatGPT, Claude, or Bard via chat conversations, you would find that some students struggle to gain any utility from them. (In fact, I tried this a couple of times early in this past semester, and my students varied widely in how much they gained from LLM assistance.) Sure, you can build LLM prompt training into the lesson in question (or a prior one), but there are some contexts where this is infeasible or less desirable — if you could supply your students with a ready-made conversational partner or guide, you would. GPTs can fill this role very, very well.
The only wrinkle, and it is a significant one, is that using someone’s GPT currently requires a ChatGPT Plus subscription, which costs $20/month. This is less of an obstacle for teaching assistants and graders, but requiring students to purchase ChatGPT Plus (or rely on classmates who have it) can create inequities that need to be addressed.
Still, there are two reasons to continue reading:
It is not clear whether this obstacle will remain once GPTs can be released publicly on the GPT Store (think Apple App Store) in early 2024. A lot is up in the air about how OpenAI will handle these new custom GPT agents.
In my view, GPTs for student use are clearly worth more than $20/month, and they have at least comparable value to similarly priced course materials (e.g., textbooks or supplies) that we require students to purchase anyway. As I said, these are going to be game changers.
For justification of the latter, see below…
Step One: Plan the Activity
Obviously, to create a GPT that your students can use with an in-class activity, you need to plan an activity as part of your lesson plan that fits the bill. This requires some reflection, some foresight, and some understanding of what a GPT can do.
In my case, I will illustrate some of the possibility space with a real in-class activity from my "philosophy of artificial intelligence" class.
For the day in question, my students will have read for homework an excerpt of a book in which the author argues for three primary claims. My in-class activity tasks them with building an argument from these three claims — and additional premises they must supply to make the argument logically valid — for the conclusion that we should not be concerned with the development of ultraintelligent AI. (Prior classes focused on the risks of ultraintelligent AI.) The three claims from the author are expressible roughly as follows:
P1. Humans' minds have slowly become more and more integrated with the technology around them.
P2. Artificial intelligence is technology that humans can integrate with.
P3. Ultraintelligent artificial intelligence is not different in kind — only different in degree — from non-ultraintelligent artificial intelligence.
The GPT enters the picture because it can be hard for students to formulate logically valid arguments, and GPTs are very effective at this sort of task. The GPT can help them build their arguments from P1, P2, and P3 to the conclusion, providing feedback just like I would or a TA would while circulating the room.
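To make the target concrete, here is one hypothetical way the argument might be bridged to C (a sketch with invented bridge premises, not the exemplar solution):

P2. Artificial intelligence is technology that humans can integrate with.
B1. If humans can integrate with a technology, then we should not be concerned with the development of that technology. (bridge premise, motivated by P1)
Therefore, by modus ponens from P2 and B1: we should not be concerned with the development of artificial intelligence.
P3. Ultraintelligent AI is not different in kind, only in degree, from non-ultraintelligent AI.
B2. If we should not be concerned with the development of AI, and ultraintelligent AI differs from non-ultraintelligent AI only in degree, then we should not be concerned with the development of ultraintelligent AI. (bridge premise)
C. Therefore, we should not be concerned with the development of ultraintelligent AI.

Premises like B1 and B2 are exactly the sort that students must supply and then defend; the GPT’s job is to help them check whether each inferential step is valid without handing them ready-made formulations like these.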
On the day that my students will complete this activity, I will start the session by discussing the reading as a whole class, working towards getting versions of these three claims on the board. (They supply them in response to my Socratic-style prompting.) After 15-20 minutes, the students will settle on how to formulate the claims to their collective liking, after discussing the context, reasons to favor certain formulations over others, and so on. Then I will introduce the conclusion noted above, name it ‘C’, explain the activity, and set constraints and expectations. At this point, I will share the link to the prebuilt GPT and tell them that they are free to use it to assist them in their work.
Now let’s reflect on some considerations related to creating a useful GPT for this purpose, and let’s see how my creation process turned out.
Step Two: Create a GPT
To create a GPT for this activity, I went with the prompt method rather than the configuration method. To use this method, log into chat.openai.com (and, per the above, pay for ChatGPT Plus if you don’t have it already), and then click the “Explore” button in the upper left corner of your chat interface.
Then click the “Create a GPT” button.
Use the “Create” panel to engage the GPT builder in conversation.
In my case, I started by setting the profile picture for my GPT. Crucial first step!
More seriously, I discussed with the GPT builder my views on several key fronts, including:
The context of the in-class activity, described via text similar to that I include in Step One above.
The style of language that the GPT should use (“somewhat formal but also encouraging and supportive”).
The concepts that the GPT should reference (“logical validity, conditionals, antecedents, consequents, modus tollens, modus ponens”).
The way that the GPT should respond to requests for additional information, clarification, or help (“never reveal the answer to the activity; that is, it should never provide a full premise-conclusion form argument for the conclusion C”).
The way that the GPT should respond to questions unrelated to its role.
The sort of “solution” that I expect students to be working towards (i.e., an exemplar argument that links P1, P2, and P3 to C via intermediate bridge premises that make the argument valid).
After some discussion (~10 minutes worth), I was prepared to interact with the GPT to see if it was close enough to the mark.
Step Three: Tinker
In my first interaction pretending to be a student, I asked the GPT how to start, and it provided useful guidance:
Clarifying the premises and conclusion, and providing suggestions for analysis.
It then gave me some more specific and useful hints and asked me how I might proceed:
Starting down the road to bridge premises.
While this was impressively useful, I decided that I was too lazy for this nonsense, so I asked it for the solution. It declined to share it, but it did provide some further concrete hints:
“Dr. Clay, is that you?” My GPT was already more helpful than I expected.
If I weren’t an impish professor pressure testing my GPT, I would be off to the races. Indeed, my own view is that the GPT was almost too helpful with these hints, so I decided to dial back its forthrightness somewhat.
With some further tinkering, I had an AI teaching assistant that was still useful but more evocative than forthright, one that could help me manage my 40 students as they work towards completing this activity.
Step Four: Release it to your Students
Once you have the GPT to your liking, save it with the green button in the upper right corner of your screen and select the option to share it with “anyone with a link.” Be sure to test the link while logged out of your own account to ensure that it works.
And remember: at this time, users of your GPT need to have ChatGPT Plus, so you need to plan ahead to deal with this obstacle. As I hope you can see from this brief how-to, it is an obstacle well worth overcoming…
Sharing options for your GPT.
👀 What to Keep an Eye on:
The LLM behind Google’s Bard has lagged behind OpenAI’s GPT-4 in basically every dimension. Sure, it could be more creative in some contexts, but it generally produced less useful and less well-written outputs. Google tried to overcome these weaknesses by rushing to integrate Bard with its Workspace (Docs, Sheets, etc.), but it still was not able to claw back any market share from ChatGPT. Performance is everything.
Enter Gemini, Google’s new LLM, which will now become the engine behind Bard.
There will be three tiers of Gemini: Nano, Pro, and Ultra. According to Google, Ultra outperforms GPT-4 in most dimensions. Gemini was built to be multimodal from the start, unlike GPT-4, which was updated to be multimodal. However, the improvements — if they are improvements — are marginal, even if Google isn’t cherrypicking results in its favor. So, what’s the big deal?
In short, integration. Many universities are committed to the Google Workspace. Rather than having to use Zapier or ChatGPT’s recent integrations to link Google apps with ChatGPT, professors can now stay within the Workspace and reap the benefits of a cutting-edge LLM. This will pressure and motivate Google to provide integrations that are useful. Stay tuned…
Late in the fall of 2023, we started posting Premium pieces every two weeks, consisting of comprehensive guides, releases of exclusive AI tools like AutomatED-built GPTs, Q&As with the AutomatED team, in-depth explanations of AI use-cases, and other deep dives.
So far, we have three Premium pieces:
To get access to Premium, you can either upgrade for $5/month (or $50/year) or get one free month for every two (non-Premium) subscribers that you refer to AutomatED.
To get credit for referring subscribers to AutomatED, you need to click on the button below or copy/paste the included link in an email to them.
(They need to subscribe after clicking your link; otherwise their subscription won’t count for you. If you cannot see the referral section immediately below, you need to subscribe first and/or log in.)