The 4-Step Process to Teach Students AI
Avoid "harming" your students' learning with AI by getting clear on its role.
[image created with Dall-E 3 via ChatGPT Plus]
Welcome to AutomatED: the newsletter on how to teach better with tech.
Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
In this week’s piece, I discuss a recent study on the impact of GPT-4-based tutors on student learning in mathematics, including what it means for those of us teaching our students to use AI. I also provide a primer on the 4-step process that I recommend to professors who are considering how to teach their students to use AI.
📈 A Change is Afoot
Before I get into today’s content, I want to share a bit about a surprising shift I have observed recently.
Over the past two years, I’ve given many presentations and webinars about AI at conferences and to a range of academic units. I’ve consulted with dozens of professors about AI integration.
In just the past six months, my conversations have shifted significantly.
The majority of professors now believe they need to take measures to prevent or discourage their students from misusing AI.
Before, my sense was that the bulk of professors had to be convinced that AI could do anything useful for practitioners or researchers in their fields, or they believed that they could reliably detect its use with only a glance.
The majority of learning specialists are now focused on AI integration.
Before, they were focused on warning and guiding the subject matter experts — faculty in biology, literature, or sociology, say — about the potential for AI misuse and rules for handling it. Their attention now is on positive uses of AI.
The learning specialists are ahead of the professors but on the same timeline.
That is, I think the majority of professors will soon believe that, when it comes to AI in higher education, positive and productive uses of AI should be the focus. This goes for basically every field of study taught and researched in colleges and universities.
It’s not that AI misuse shouldn’t be avoided or discouraged; rather, it’s that most of our energy should be dedicated to leaning into the power of AI. After all, 71% of undergraduate students expect it to be, 48% of those same students say they do not feel prepared for an AI-enabled workplace, and analysts (e.g. at Gartner and McKinsey) project that 70-80% of workplace tasks could be automated by AI to a significant degree by 2030.
I know I don’t need to convince you, my readers, of how important it is for our institutions to be sensitive to how they are developing and presenting degrees, programs, courses, and other offerings, given how they are perceived by stakeholders in society more broadly. In many places around the world and especially in the USA, higher ed is under a lot of pressure.
This brings me to the topic of the next AutomatED webinar…
⏩ Get Up to Speed with Our Sept. 6th Webinar
On Friday, September 6th, from 12:00 PM to 1:30 PM EDT, I will be hosting a Zoom webinar titled "Train Your Students to Use AI." This session is designed to equip college and university educators with effective strategies for integrating AI into their classrooms, regardless of their fields.
You can check out the dedicated webinar webpage for more detail, but here are the highlights.
Dates and Numbers
Standard Price: $150
Early Registration Price: $75
✨Premium Subscribers: Additional 5% discount on early registration
Early Registration Deadline: Saturday, August 31st (at 11:59pm)
Total Available Seats: 50
Minimum Participation: 20 registrations by Sunday, September 1st; if we do not reach 20 registrations by this date, all early registrations will be fully refunded and the webinar will be canceled/rescheduled
Money-Back Guarantee: You can get a full refund up to 30 days after the webinar’s date, for any reason whatsoever
What To Expect
Live 90-Minute Interactive Webinar on Zoom:
How to adjust and create course objectives for AI integration
How to design activities, assignments, and assessments that help your students achieve these objectives
Practical AI training techniques, with a focus on LLMs like ChatGPT, Gemini, and Claude
Ways to integrate AI ethics throughout one’s courses, not merely as a tacked-on unit
Q&A session (with me, Dr. Graham Clay)
Five Post-Webinar Resources:
Complete video recording and AI-generated summary
Complimentary ✨Premium Guide on Training Students to Use AI
Set of LLM prompts for AI integration
Flowchart for designing AI-integrated activities
Checklist for evaluating ethical implications of AI in courses
Below, as a bit of a condensed preview of the first half of the webinar’s content, I will discuss the 4-step strategy I recommend professors deploy to teach their students to use AI for field-specific tasks. I will also discuss new research that sheds light on the vital importance of the first few steps.
✅ The 4-Step Process of Productive AI Training
Last week, I wrote about how you can set up your students for success and avoid the headaches of AI misuse. Some readers wrote to me, expressing concern that I was overly focused on AI misuse.
“You're missing the point that AI is successfully being used in businesses around the world, and these students need to learn how to use it rather than being denied using it. Let's incorporate AI into our courses and teach them effective, efficient uses of it.”
I said to them: “I agree completely.”
However, I think all of us should believe that at least some uses of AI count as misuse. At the very least, if a student’s use of AI now undermines their ability to use AI later — because, say, they don’t learn the subject matter well enough to vet its outputs in more complex use cases — then surely this amounts to misuse (more on this issue below).
And since I think there’s something we can do about AI misuse — see last week’s piece for 3 strategies or our ✨Premium Guide for more — I think professors need to take action, ideally before each semester starts, via adjustments to their course structure and incentives.
But these readers were right to press me to say more about positive and productive uses of AI.
In this section, I will do so as I briefly explain the 4-step process I recommend for professors who are considering training students to use AI positively and productively. Along the way, I will briefly review a recent study on the impact of AI tutors on student learning.
As I noted above, this will amount to a brief preview of some of the content of our webinar on September 6th, as well as that of my ✨Premium Guide on the topic.
Step 1: Evaluate the Role of AI in Your Field
The first step in effectively integrating AI training into your teaching is to thoroughly evaluate AI’s current and future role in your field. This involves assessing the extent to which AI can augment or enhance core tasks, processes, and outputs in your discipline.
A recent draft study — “Generative AI Can Harm Learning” by Bastani, Bastani, Sungu, et al. — provides useful insight into the considerations surrounding this sort of evaluation. (Its title is unfortunately presumptuous, for reasons that will soon become clear.)
In this study, the researchers deployed two versions of AI tutors relying on GPT-4: GPT Base (similar to the publicly available ChatGPT) and GPT Tutor (with safeguards modeled on the behavior of human tutors). These AI tutors were used in about 15% of the math curriculum across three grades in a high school.
While both AI tutors significantly improved student performance during AI-assisted practice sessions (48% improvement for GPT Base and 127% for GPT Tutor), the effects on unassisted exam performance were quite different. Students who had access to GPT Base performed 17% worse on subsequent unassisted exams compared to those who never had access to AI assistance.
Whether this is a bad thing — whether it amounts to AI “harming” learning — depends on a range of factors that may or may not be realized. For instance, it matters whether students will have access to AI tools when completing similar tasks in their future academic or professional lives, or whether they will need skills that can only be developed without these tools in order to use them well later.
In short: will early use of AI tools hinder students' ability to use them effectively in the future, or will it simply prepare them for a world where such tools are ubiquitous and they can rely on them?
As a professor integrating AI training in your courses, you need to consider whether the tasks you're teaching are likely to be augmented or replaced by AI in the future, and whether the fundamental skills you're developing will remain crucial for effective AI use. This turns, in part, on the reliability of current and future iterations of the tools in question, the complexity of the tasks involved, and other empirical factors.
Calculators are incredibly reliable at calculations but can complete vanishingly few complex math problems without guidance on what to calculate. Hence, students need to learn the fundamentals of mathematics despite the existence of powerful calculators. Is the same true of the AI tools relevant to your field?
In other words, if your students can reasonably expect to always have access to GPT Base and needn’t guide or double-check its future iterations’ outputs, why not just lean into training them to use GPT Base well? If your answer is in the negative, you need to point to a reason why this would “harm” their learning in the long run.
Step 2: Adjust Existing Course Objectives
After evaluating the role of AI in your field, the next crucial step is to review and adjust your existing course objectives.
The contrast between assisted and unassisted performance in the study discussed above emphasizes the need to thoughtfully modify course objectives when incorporating AI training. While students showed significant improvement during AI-assisted practice sessions, those who used GPT Base performed 17% worse on subsequent unassisted exams. This result can be seen as indicative of a failure, but it might also be seen as expected, if one’s objective is to teach one’s students to use tools like GPT Base effectively.
That is, one option is to have your course objectives concern skills or knowledge that would be best assessed or tested by unassisted exams, while another option is to have course objectives and corresponding assessments that incentivize students to lean into their skills with AI tools.
I recommend categorizing your existing objectives based on how AI affects them.
Some objectives may need to remain independent of AI influence to ensure students develop core competencies. For instance, in the math context of the study, objectives related to understanding underlying mathematical concepts or developing problem-solving strategies might need to be preserved or even emphasized more strongly. Other objectives might be significantly enhanced by AI integration. For example, objectives related to checking work or exploring multiple solution methods might be augmented by AI tools.
Step 3: Consider Adding New Objectives
After adjusting existing objectives, consider whether you should add new objectives specifically related to AI use.
One key area for new objectives is developing students' theoretical understanding of AI.
In the context of the above study, this might involve helping students understand how large language models like GPT-4 work, their capabilities, and their limitations in mathematical problem-solving. The study found that GPT Base gave a correct answer only 51% of the time, on average. Understanding these potential pitfalls is crucial for effective AI use. New objectives could focus on teaching students to recognize when AI might be giving incorrect or misleading information.
Another important area for new objectives is developing students' practical proficiency with AI tools.
The study showed a significant difference in how students interacted with GPT Base versus GPT Tutor. Students using GPT Base often simply asked for answers, while those using GPT Tutor learned to interact more substantively with the tool over time. New objectives could focus on teaching students how to craft effective prompts, how to engage in meaningful dialogue with AI tools, or how to use AI as a complement to their own thinking rather than a replacement for it.
New objectives could also focus on helping students understand the ethical implications of AI use in your field, including issues of fairness, privacy, and the potential for AI to exacerbate or mitigate existing inequalities.
Step 4: Practical Implementation
The final step in the process is to build towards the achievement of your adjusted and new objectives with in-class activities and out-of-class assignments that seamlessly weave AI training into your course, along with corresponding assessments.
One strategy is to start with in-class demonstrations of relevant AI tools. These can provide students with a firsthand look at how the tools function in real-time and their potential applications within your field. I recommend structuring these demos around some of the preparatory course objectives you devised in steps 2 and 3, showcasing specific features, functionalities, and use cases aligned with your course content. Encourage active participation by incorporating Q&A sessions and allowing students to explore the tools alongside you.
In my courses in Philosophy, I have found group activities centered around AI exploration and application to be particularly effective. These might range from brainstorming sessions where students collectively identify potential AI use cases within your field to collaborative projects where they work together to leverage AI tools for specific tasks. To ensure active engagement from all group members, consider assigning distinct roles and responsibilities within each group and including peer feedback and evaluation components.
Individual in-class activities are also crucial for providing students with dedicated time for hands-on experimentation and personalized learning with AI tools. These could include guided tutorials where students work through pre-designed exercises, or more open-ended tasks that encourage them to explore the tools' functionalities independently. Consider activities such as prompt analysis (assessing how students craft effective prompts for AI tools), output evaluation (evaluating students' ability to critically analyze AI-generated outputs), integration projects (assigning tasks that require students to integrate AI tools with other course content or other technologies), and reflective writing (having students reflect on the benefits and limitations of AI use in your field).
The above study highlights the importance of monitoring and assessing the impact of AI implementation on student learning. The researchers found that students' perceptions of their learning with AI often didn't match their actual performance. Several professors have reported to me in consultations that they have had significant issues with this problem, especially when they have taken a more laissez-faire approach to AI implementation in their courses.
For out-of-class assignments, most professors need to design both AI-inclusive and AI-exclusive tasks. AI-inclusive assignments might involve using AI tools for research, analysis, or creative projects, while AI-exclusive assignments ensure students can perform essential tasks independently (this goes back to the earlier steps). When designing AI-inclusive assignments, consider implementing safeguards to prevent misuse, such as requiring students to document their AI interactions with process logging.
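As a loose illustration of what such process logging might look like in practice, here is a minimal sketch a student (or instructor) could adapt. Everything here is hypothetical — the file name, function, and fields are illustrative choices, not a tool mentioned in the study or this newsletter:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log file; JSON Lines keeps one interaction per line.
LOG_FILE = Path("ai_interaction_log.jsonl")

def log_interaction(prompt: str, response: str, tool: str = "ChatGPT") -> dict:
    """Append one AI interaction (prompt + response) to the log for later review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "response": response,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: a student records one exchange before using it in their write-up.
entry = log_interaction(
    prompt="Summarize the main objection to premise 2 of the argument.",
    response="The objection is that premise 2 assumes what it sets out to prove...",
)
```

The point of a structured log like this is that students can submit it alongside their work, letting you see not just the final product but how they prompted, what the AI returned, and what they kept or revised.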
Throughout the implementation process, it's crucial to provide ongoing guidance and support as students develop their AI skills. This might involve regular check-ins, discussions about effective AI use strategies, and opportunities for students to reflect on and refine their AI interactions.
I will cover this 4-step process in much greater depth at the webinar in two weeks' time, with a heavy emphasis on Step 4 (~55 minutes of the 90 total), so be sure to register if you are working on AI training in your courses this semester or next.
📢 Quick Hits:
AI News and Links of Interest
1. Anthropic released LaTeX rendering as a “feature preview” for Claude, enabling it to display mathematical equations and expressions. (To enable this feature, click your email or account name on the left sidebar, click “Feature Preview” and enable it with the toggle.) They also released a screenshot button — next to the “Add content” attachment paper clip — that enables you to quickly show Claude what you’re looking at.
2. Google’s AlphaProof and AlphaGeometry 2 “solved four out of six problems from this year’s International Mathematical Olympiad (IMO),” putting them at the silver-medal standard. Along the way, the systems “solved one problem within minutes and took up to three days to solve the others,” including “the hardest problem in the competition, solved by only five contestants at this year’s IMO.” (The AI’s solutions here.)
3. K12 teachers in a recent case study realized the most productivity gains from generative AI use when they also sought “input” from AI (e.g. “thoughts or ideas about learning plans”) as opposed to merely using AI for “outputs” (e.g. producing “quizzes or worksheets”). The authors report that “[t]he divergence among teachers seems to derive from different responses (attraction versus aversion) to the two capabilities of generative AI: the ability to generate content (create output) and the ability to learn something from its intelligence (provide input).” (Our Course Design Wizard is good for both!)
4. Before now, you needed to use Discord to access the image creator Midjourney. Now you can go on their website and generate up to 25 images for free without signing up (more details on the process here).
5. OpenAI has just launched fine-tuning for GPT-4o, enabling users to provide custom datasets to refine the base model for higher performance. 1M training tokens per day are also free.
What'd you think of today's newsletter?
With the semester approaching — or already here, as the case may be — I am updating some of our ✨Premium pedagogy Guides. Here’s the schedule:
August 21 - AI Misuse Guide, AI Policy Guide
I finished updating both of these last week, so be sure to check them out if you haven’t already.
This week, I will release an updated version of the following:
August 28 - AI Assignment Guide
Remember: accessing the Premium Archive — with all 15 Guides and Tutorials, plus our 3 Premium-only GPTs — and the next year’s worth of new pieces will cost $99 starting on September 1, so be sure to lock in the current price today if you haven’t already.
Graham

Expand your pedagogy and teaching toolkit further with ✨Premium, or reach out for a consultation if you have unique needs. Let's transform learning together. Feel free to connect on LinkedIn, too!