Reducing the Cost of Learning "Reps" with GPTs

What can we learn from Google Labs' games?

[image created with DALL·E 3 via ChatGPT Plus]

Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In this week’s piece, Google Labs’ AI games inspire me with ideas for custom GPTs, I share the link to our course design wizard GPT, and I announce our “build a GPT” webinar.

💡 Idea of the Week:
Use AI to Reduce Cost per Learning “Rep”

A recent piece from Marc Watkins drew my attention to something from Google that I somehow missed or forgot — namely, Google Labs’ “experiments with the future of AI.” Here is Watkins on why these experiments matter to educators:

What’s […] fascinating is how Google is using games to foster creative exploration with this new technology and how easy it is for K12 or higher education folks to adapt and pair such activities with critical reflection to help students not only understand how parts of the underlying tech functions, but also help them build their knowledge about music, language, and the arts.

Marc Watkins, “Google's Creative AI Experiments Provide Vivid Learning Experiences”

But what sort of games are we talking about here?

“Say What You See” helps you “learn the art of image prompting with the help of Google AI.” More specifically, it gives you images and you try to write the prompts that generated them — you win when the image generated from your prompt sufficiently matches the image you were given.

You aren’t left to mere trial and error with different prompts. The game also coaches you along the way, helping you learn how to improve at image prompting.

Here is my first attempt:

And here is what happened:

Close but a bit wonky… See if you can do better!

There are other useful educational games from Google Labs (like “Odd One Out”, which is designed to test users’ ability to distinguish AI-generated images from others), but most courses at the university level cannot directly leverage them. At bottom, these games are too basic and too generic. However, we can gain from reflecting on what makes them successful.

These games’ strength is that they combine coaching with trial-and-error hands-on experience — and the experience is with using or evaluating an AI tool. Perhaps we can make similar games for our more advanced use cases in the university setting, whether or not we are training students on their AI-related skills…

Indeed, custom GPTs, which OpenAI released in November and which we have covered before, allow us to give our students more repetitions (“reps”) at learning tasks. In effect, these GPTs can radically lower the cost per rep, if we design them properly.

For instance, consider language learning courses. A custom GPT could present real-world scenarios in the target language — like a shopkeeper in New Delhi asking you what you would like to purchase or a taxi driver in Lima telling you about his recent wedding — requiring students to respond appropriately. The GPT would provide instant feedback on their grammar, vocabulary, and cultural sensitivity. This approach mirrors immersive language exposure, offering students a safe space to practice and improve their language skills without the pressure of a real conversation.

Another application could be in philosophy classes. Here, students could explore ethical dilemmas by interacting with a custom GPT representing different philosophical systems or schools. In this game, students would be presented with complex scenarios and would have to navigate them by deploying and defending their own philosophical principles. The GPT could be customized to challenge their views from the perspective of different moral theorists — utilitarians, say — prompting them to justify their reasoning and consider strong pushback from a well-versed “partisan” who might disagree with them. This interactive approach deepens their understanding of ethical theories and hones their argumentation skills, and it does so in a low-stakes but realistic way.

These examples illustrate how we can adapt the principles of Google Labs’ educational games to more advanced settings with custom GPTs. These tools can make learning more interactive in a way that lowers the cost of the trial and error that always lies on the path to mastery.
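For readers who want to tinker beyond the GPT builder, here is a minimal sketch of the shared pattern behind both examples: a system prompt that sets up the scenario-and-feedback loop, plus a small helper that keeps the conversation going. It uses the standard OpenAI Python API rather than a custom GPT, and the model name, instructions, and `chat` helper are illustrative assumptions, not our actual configuration.

```python
# A rough sketch of the scenario-and-feedback "rep" pattern, approximated with
# the OpenAI Python SDK instead of the ChatGPT GPT builder. Illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "custom" part is just the instructions: present one scenario at a time,
# wait for the student's reply, then coach on grammar, vocabulary, and culture.
TUTOR_INSTRUCTIONS = """You are a Spanish conversation partner for undergraduates.
Present one short real-world scenario at a time (e.g., a taxi driver in Lima
making small talk about his recent wedding). After the student responds in
Spanish, give brief feedback on grammar, vocabulary, and cultural
appropriateness, then continue the scene."""

def chat(history, student_message, model="gpt-4o"):
    """Send the running conversation plus the student's latest reply;
    return the updated history and the tutor's response."""
    history = history + [{"role": "user", "content": student_message}]
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": TUTOR_INSTRUCTIONS}] + history,
    )
    reply = response.choices[0].message.content
    return history + [{"role": "assistant", "content": reply}], reply

# One "rep": the student asks for a scenario, attempts a reply, gets coaching.
history, scenario = chat([], "Estoy listo. Dame un escenario, por favor.")
print(scenario)
history, feedback = chat(history, "Quiero dos boletos para el museo, por favor.")
print(feedback)
```

The same skeleton plausibly covers the philosophy case as well: swap the instructions for a well-versed “partisan” who defends a particular moral theory and pushes back on the student’s argument.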

🧰 An AI Tool for Your Toolbox:
AutomatED’s Course Design Wizard

We are excited to announce that our very own custom GPT — the AutomatED “College/University Course Design Wizard” — is now freely available to anyone with ChatGPT Plus via the GPT Store, which OpenAI launched last week.

In effect, it is an AI colleague that embodies and expresses good pedagogy, with a large dollop of expertise in AI-related issues in teaching. It is instructed to help you either integrate or exclude AI effectively, avoid data privacy mistakes, and express your stance on AI to your students.

It relies primarily on our own research (like our Premium guides, linked at the bottom of this piece) and a range of exemplar assignments and course policies we have developed. But it also draws on our analyses of pedagogy experts’ views (like those Zaretta Hammond develops in “Culturally Responsive Teaching and the Brain” and those the National Research Council expresses in “How People Learn”) and on well-known pedagogical frameworks like Universal Design for Learning (UDL).

Two weeks ago, we released it to our Premium subscribers, and they have found it very useful. We have improved it and expanded its features based on their feedback, and we plan to continue to polish it and add features in the coming months.

Here is one way you can use it…

A Use Case: Increase the AI Immunity
of an Assignment

As you can see from the above screenshot, four conversation starters are presented at the outset for you to select from, if you want.

One is “I want to make a pre-existing assignment AI-immune.”

Selecting it sends the GPT down a sequence of steps to gather more information from you about your educational context, the specifics of the assignment you are envisioning, the profile of your students, and your pedagogical preferences. It tends to work through these in order — assuming you don’t ask it to operate differently — by asking clusters of questions, like the following about my educational context:

In my case, I gave it the following (somewhat) generic information about a political philosophy assignment I had in mind:

The pre-existing assignment is a take-home essay. It is on political philosophy -- rights, liberty, and responsibilities in connection with citizenship in representative democracies. Students get to pick their own topic. It is 5 pages, double spaced. The primary learning objective is to improve students' ability to establish a philosophical thesis. To achieve this objective, students need to develop premises and form them into a valid argument, as well as defend them against an objection that they develop (this is a requirement). I am worried that students will simply use ChatGPT or Claude or Bard to write their essays, or at least use them for the core intellectual work.

My (somewhat) generic response.

At this point, the GPT will generally ask further questions, like the following:

Or it will give you suggestions:

The AutomatED GPT’s suggestions.

Whether the GPT asks more questions or not depends on how much information you share at the start. Of course, you can always add more context — including by uploading files, which it will urge you to ensure are free of any personally identifying information — to get more specific responses.

Even without significant context, the output is very useful! For instance, the above suggestions conform with the broad sorts of advice we have provided in our many pieces on AI immunity (including our last attempt to crack an assignment with AI and our guide on discouraging and preventing AI misuse).

Obviously, there are other use cases for the GPT, like making assignments that require students to deploy AI tools or creating assignments that train students to use those same tools more effectively. The GPT consistently surprises me with its creativity and utility.

We have tried to make the GPT as flexible as possible — open to a range of professors’ perspectives on the role of AI in their course design, just like we are. Give it a whirl!

👀 What to Keep an Eye on:
“Build Your Own GPT” February Webinar

We are excited to announce our upcoming "Build Your Own GPT" webinar, a Zoom-based learning experience designed for professors, instructional designers, and anyone else who wants to incorporate custom GPTs into their pedagogy this year.

The February webinar — led by yours truly — will give a cohort of like-minded educators hands-on guidance in developing custom GPTs tailored to their specific needs. Whether you're looking to enhance student engagement, streamline course creation, improve your in-class activities, or empower subject matter experts, this webinar is your gateway to unlocking the full potential of custom GPTs.

By the end of the webinar, you can expect to have learned how to develop your own GPTs — but you will also walk away with one already developed!

The price will be $99. All Premium subscribers will get a 10% discount that we will make available in our Premium newsletters prior to the event.

Let us know below whether we should reserve a seat for you!

Would you like to attend our February AutomatED webinar on building your own GPTs?

Come learn how you can leverage this exciting new technology!


Glenda Morgan on Challenges to Online Learning

In the newsletter piece linked below, Morgan…

- Reflects on the significant developments in online learning in the U.S. in 2023, indicating that regulatory and research focus is shifting from for-profit models and OPMs (online program managers) to the broader realm of online education itself, with increasing scrutiny from think tanks and activist organizations.

- Highlights that online learning's challenges, such as transparency issues and unequal student support, are also prevalent in traditional on-campus education, advocating for enhancing online methods rather than defaulting to conventional approaches.

- Urges a focused effort on defining and improving online learning quality and reevaluating aggressive marketing strategies, calling for collaboration among educators, institutions, and regulators to advance the field.

Summary courtesy of ChatGPT (GPT-4).

Philippa Hardman on AI and “Un-Personalised” Learning

In the newsletter linked below, Hardman proposes that…

- AI in education has predominantly emphasized personalized learning, drawing on Bloom’s 2-Sigma research, which demonstrated the effectiveness of one-on-one mastery learning.

- However, there is now a shift towards recognizing the substantial benefits of communal learning, focusing on collaboration and social interaction, which provide academic and socio-cultural advantages alongside personalized approaches.

- There is a need to explore AI's potential in enhancing not just personalized learning but also in optimizing and scaling communal learning experiences, thus achieving a balance between individual and collective educational needs.

Summary courtesy of ChatGPT (GPT-4).

Lily Lee and Aditya Syam on AI Literacy

In the newsletter linked below, Lee and Syam…

- Discuss the growing importance of AI literacy in education, highlighted by a recent poll of theirs showing strong support for introducing AI literacy at various educational levels, with the highest preference for elementary school.

- Feature the Artificial Intelligence Literacy Act introduced in Washington, aimed at incorporating AI literacy into digital literacy programs across different educational settings and emphasizing its significance for national competitiveness and workforce preparedness.

- Outline resources for educators and students to enhance AI understanding, including interactive tools explaining AI concepts, in-depth mathematical explorations of large language models, and a collection of assignments from Harvard’s metaLAB to engage students with AI tools.

Summary courtesy of ChatGPT (GPT-4).

⬆️ How to Access Premium

Late in the fall of 2023, we started posting Premium pieces every two weeks, consisting of comprehensive guides, releases of exclusive AI tools like betas of AutomatED-built GPTs, Q&As with the AutomatED team, in-depth explanations of AI use cases, and other deep dives.

So far, we have released three Premium guides that are each worth a deep dive:

To get access to Premium, you can either upgrade for $5/month (or $50/year) or get one free month for every two (non-Premium) subscribers that you refer to AutomatED. Pssst… we expect prices to go up later this spring.

To get credit for referring subscribers to AutomatED, you need to click the button below or copy and paste the included link into an email to them.

(They need to subscribe after clicking your link; otherwise, their subscription won’t count for you. If you cannot see the referral section immediately below, you need to subscribe first and/or log in.)