Must Your Students Buy LLM Access?

Claude Pro would be the pick for most.

[image created with Dall-E 3 via ChatGPT Plus]

Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In this week’s piece, I compare the paid plans offered by the creators of the main large language models as I discuss whether you should require your students to purchase access to them.

I also reflect on how to handle students who are confused about who is correct: you, the instructor, or their AI tool.

💸 Should You Require Your Students to Purchase ChatGPT Plus or Claude Pro?

The decision to require students to purchase access to premier tiers of large language models (LLMs) should not be taken lightly. While ChatGPT Plus (or Edu or Team), Claude Pro (or Team), and Gemini Advanced come with benefits, their cost isn’t trivial, and mandating their purchase introduces a range of ethical considerations. You should weigh the educational benefits against the costs, ensuring that any required expenditure is justified by substantial pedagogical gains. This is especially true since your institution may already provide access to GPT-4o via Microsoft Copilot or to Gemini Advanced via Gemini for Workspace.

My Experience: This past semester, I required my students to purchase ChatGPT Plus to use the custom GPT tutors that I built for them, as well as to build custom GPTs of their own. (Be sure to check out my in-depth ✨Premium Tutorial on How to Build (Better) Custom GPTs if you are considering doing so.) I had never required my students to purchase anything before — I had always provided all course materials via Blackboard, Sakai, or Canvas, free of charge — so it was a tough decision.

But my students loved it: many told me they thought the price was more than worth it for access to my custom GPT tutors, and none said they wished I hadn’t made it a requirement. However, as I discuss below, I’m not sure I would do this again, given that students can now access custom GPTs without a paid plan and that Claude has comparable offerings.

Here are the main premium tier options, with reasons for and against:

ChatGPT Edu/Plus

Cost: Edu is priced on an institutional basis; Plus is $20/month.

Reasons for:

- Students can build their own custom GPTs (with Edu, they can share them within university workspaces)

- More reliable access to the latest model, GPT-4o, with fewer usage restrictions and improved language capabilities

- The latest model comes with Advanced Data Analysis, vision (the best on the market), Dall-E 3 image generation, web browsing, and file upload in every conversation

- Enhanced privacy and data protection options come standard (with Edu, it is “Enterprise-level” and there are controls for security, privacy, and administration, like group permissions and GPT management; with Plus, you and your students will need to manually opt out of letting OpenAI use your data for training)

Reasons against:

- Students can use custom GPTs, including professor-built ones, without a paid plan (although they cannot build their own)

- Standard usage limits are generally adequate unless the class focuses on AI use

- Data protection and privacy safeguards are less critical if students are properly educated about the risks and develop the relevant AI literacy

ChatGPT Team

Cost: $30/month

Reasons for:

- Everything in Plus, along with higher message limits, a shared workspace for GPTs, and data excluded from training by default (the latter two are already features of Edu but not Plus)

Reasons against:

- Same as for ChatGPT Edu/Plus

Claude Pro

Cost: $20/month

Reasons for:

- Access to Claude’s Artifacts feature, which shows the real-time effects of prompt changes (useful for training students to prompt)

- Students can use and build Claude Projects, Anthropic’s analog to custom GPTs

- More reliable access to the latest model, Claude 3.5 Sonnet, and the other top Claude 3 models (Opus and Haiku), with fewer usage restrictions and improved language capabilities

- Higher usage limits

Reasons against:

- Standard usage limits are generally adequate unless the class focuses on AI use, although this is less true than with ChatGPT — the free plan’s limits on Claude 3.5 Sonnet are rather strict

Claude Team

Cost: $30/month

Reasons for:

- Everything in Pro, along with higher usage limits, a shared workspace for chats, and central billing and administration

Gemini Advanced

Cost: $20/month

Reasons for:

- Large context window — 1,500,000 tokens vs. 200,000 for Claude Pro and 128,000 for ChatGPT Plus — for processing and analyzing more extensive data (see the quick sketch after this list for what that difference means in pages of reading)

- Upload a range of file types (not otherwise available in Gemini, unless your institution has Gemini for Workspace)

- Edit and run Python code directly in Gemini

- 2 TB of storage from Google One

Reasons against:

- The large context window is accessible for free via Google’s AI Studio (see below), making the paid version less justifiable

- Other features — reasoning, code generation, etc. — are on a par with or worse than GPT-4o and Claude 3.5 Sonnet
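To make those context-window figures concrete, here is a quick back-of-the-envelope sketch; it also happens to be the sort of short Python snippet students could edit and run directly in Gemini. The conversion factors (roughly 0.75 words per token for English prose and about 500 words per double-spaced page) are rough assumptions on my part, not provider specifications, so treat the results as ballpark estimates.

```python
# Back-of-the-envelope: roughly how much reading material fits in each
# premium tier's context window. Both conversion factors are assumptions,
# not provider specifications.

WORDS_PER_TOKEN = 0.75   # assumed average for English prose
WORDS_PER_PAGE = 500     # assumed double-spaced page

context_windows = {      # token figures as cited in the comparison above
    "Gemini Advanced": 1_500_000,
    "Claude Pro": 200_000,
    "ChatGPT Plus": 128_000,
}

for plan, tokens in context_windows.items():
    words = tokens * WORDS_PER_TOKEN
    pages = words / WORDS_PER_PAGE
    print(f"{plan}: {tokens:,} tokens ≈ {words:,.0f} words ≈ {pages:,.0f} pages")
```

On those assumptions, Gemini Advanced’s window holds on the order of 2,000 pages of reading, versus roughly 300 for Claude Pro and under 200 for ChatGPT Plus, which is what “processing and analyzing more extensive data” amounts to in practice.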

It is challenging to justify requiring students to purchase Gemini Advanced when its most distinctive and valuable feature — the large context window — can be accessed for free via Google's AI Studio, at least until Gems arrive. In fact, an even larger context window of 2,000,000 tokens is now available there. I have discussed how to leverage it before, in case you missed that piece.

Similarly, while ChatGPT Edu and Plus offer the capability to build custom GPTs, this feature is not necessary for many classes since professors will create most of the useful GPTs (tutors, coaches, interlocutors) for their students — and access to custom GPTs doesn’t require a paid plan anymore. Of course, if you want your students to build custom GPTs, and this is central to your course, then ChatGPT Edu and Plus are worth taking seriously.

For all other courses that require students to engage deeply with LLMs, Claude Pro presents a compelling option, especially with the recent release of Anthropic’s Console prompt generator and tester.

Although Claude Projects have more limited functionality than custom GPTs, they are generally sufficient for most educational use cases, as I will explain in a coming newsletter. Additionally, Claude Pro costs the same as ChatGPT Plus while offering the added benefits of Claude 3.5 Sonnet, which is a better writer than ChatGPT, and of the Artifacts feature, which is extremely useful for training students to use LLMs effectively.

Personally, this coming semester, I would be tempted to either require no premium tier of my students or go with Claude Pro.

Will you require your students to purchase an AI tool next semester?


✨Recent and Upcoming Premium Content

July - Tutorial: 3 Ways Profs Should Use ChatGPT (GPT-4o)

July - Tutorial: 3 Ways Profs Should Use Gemini (1.5 Pro)

💡 Idea of the Week: Navigating Instructor-AI Conflicts

This past semester, I faced an unexpected challenge with a student using a custom GPT tutor I had built for my philosophy class.

Despite my warnings that the GPT, though helpful, wasn't perfectly reliable or knowledgeable — indeed, we had also explored its flaws together, as a class — this student relied on it to produce work that she came to believe was flawless precisely because the GPT asserted it was flawless.

When I told her that the work was nonetheless flawed and that its flaws explained why it got a B, she got (mildly) angry, insisting that the GPT was correct because it said so, even after I showed her why it was not. This led to a discussion where she ultimately conceded that the GPT was mistaken and she deserved the grade I had given her, but she expressed annoyance and frustration that the tutor I provided was fallible (at least sometimes).

This incident underscored the complexities of integrating AI and highlighted the need for deeper AI literacy and better AI training.

AI training isn't just about having a technical understanding of how to use AI tools; it's also about recognizing their limitations and learning how to judge the quality of their outputs. While AI can offer valuable feedback and assist in many ways, it shouldn't replace the judgment of the student, much less that of an expert instructor. But this can be hard to grasp for students who lack sufficient judgment to know when AI is producing high-quality outputs. This is why AI-independent judgment is crucial for developing skill with an AI tool — a topic I discuss at length in my ✨ Premium Guide on How to Train Students to Use AI (which I highly recommend if you plan to incorporate AI training next semester — it is a very deep dive, with theoretical insights and practical solutions).

We need to teach our students to approach AI with a critical eye. This includes recognizing that AI-generated outputs should be one of many elements that they blend and synthesize into a finished product, not the definitive source of truth. They need to know how much of the AI’s outputs ought to be incorporated or retained. By fostering this critical perspective, we prepare students to navigate an AI-driven world where they can leverage these tools effectively and reliably.

The ultimate authority over, and responsibility for, students’ work remains with them, even if they get help along the way from a book, a peer, or AI.

Yet, on the other hand, this situation with my student illustrated a tension that comes with the growing capabilities of AI. As AI continues to improve, it becomes more tempting — and more reasonable — for students to trust its outputs before they are able to judge their quality. (Take it to the extreme: if AI were known to be infallible, then they would certainly be right to treat it differently than they do now.)

At some point in the coming years, I suspect a student will be right to be skeptical if I, their instructor, disagree with the AI. And I don’t think I can say “thank goodness this will happen after I retire.”

What'd you think of today's newsletter?


✉️ What You, Our Subscribers, Are Saying

A Response to My Interview with Accounting Professor Jose Luis Ucieda Blanco

“It was reassuring to see someone's experiences that are comparable to my own. I'm a lot less enthusiastic about AI than Jose, but am trying to adapt and have had some successes and some flops. Most of the time when I read these newsletters I just feel overmatched and behind the curve.”

Anonymous Subscriber

Thanks for the feedback — I will try to be more mindful of how overwhelming everything AI-related is! Be sure to check out our 7-email primer series, discussed below, if you haven’t enrolled already…

A Response to My Interview with Accounting Professor Jose Luis Ucieda Blanco

“I'm currently enrolled in a course designed for university faculty to learn how to integrate AI into the classroom. I also happen to be in the age category described by Professor Blanco. I agree with his calculator example as technology that has been introduced to improve and speed up our work.”

Anonymous Subscriber

Are you going to train your students to use AI next semester/term?

“I teach a tax research class. We are going to use AI to research tax situations and then research the case and compare it to what the AI produced. I also created an AI tutor for my UG Individual Taxation class. I have to test it still to see how it is working.”

Anonymous Subscriber

🧰 Enroll in Our Free 7-Email AI Primer

Note: If you don’t know the answers to the below questions, you may want to enroll in our “Insights Series,” which I just finished updating this past week!

What’s the difference between Copilot Pro, Team Copilot, and Copilot for Microsoft 365?

What is zero-shot prompting?

What is “pairing” and why does it discourage AI misuse by students?

What are two ways to use AI for feedback and grading that carry no privacy, data, or — arguably — moral risks?

The Insights Series is a 7-email sequence for new subscribers that conveys who we are and some crucial information that I have learned and written about in the past year.

Like AutomatED, it is designed for professors and learning professionals who want actionable but thoughtful ideas for navigating the new technology and AI environment — and it will get you up to speed with AutomatED, too. (These emails come in a sequence on Thursdays.)

Every new subscriber to AutomatED is now enrolled in it by default, but you may have signed up before we released it… Sorry about that!

Still, you can enroll yourself manually by clicking “Yes” below. You can always unenroll later (from the second installment onward) without unsubscribing from AutomatED’s weekly newsletters.

Would you like to be enrolled in the AutomatED Insights Series?


Graham

Expand your pedagogy and teaching toolkit further with ✨Premium, or reach out for a consultation if you have unique needs.

Let's transform learning together.

Feel free to connect on LinkedIn, too!