Teaching Like It's 2005?!

And other questions I've received in the last few weeks.

Welcome to AutomatED: the newsletter on how to teach better with tech.

In each edition, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

Last Week: I took a closer look at Google’s new Deep Research tool, which “kills” the research report like ChatGPT “killed” the essay. I showed how the tool works and how to access it. Then I shared two reports the tool produced for me, my evaluations of them, and my recommendations for what teaching professors should do in light of its power. Click here to read it if you missed it.

Today, I report back from the field: I share some of the good questions faculty have asked me lately, as well as what I said in response.

You may have some of the same questions, so I hope this is useful to you, but feel free to respond to this email to ask me other questions — I’ll include my answers in a later edition.

Remember, too, my next public AutomatED webinar, which will be held on Zoom on January 31 and will focus on techniques for building better AI tutors.

📬 From Our Partners: A Security-First Meeting Assistant

Automate your meeting notes

Get the most accurate and secure meeting transcripts, summaries, and action items.

Never take meeting notes again: Fellow auto-joins your Zoom, Google Meet, and MS Teams meetings to automatically take notes.

Condense 1-hour meetings into one-page recaps: See highlights like action items, decisions, and key topics discussed.

Claim 90 days of unlimited AI notes today.

Remember: Advertisements like this one are not visible to ✨Premium subscribers. Sign up today to support my work (and enjoy $500+ worth of features and benefits)!

❓ My Answers to 5 Faculty Questions

This year, I plan to keep you all more aware of what I’m doing and learning while helping faculty engage with and integrate AI.

Two weeks ago, I spoke to faculty at NOVA during the WONDER conference, then flew to Benton Harbor, Michigan to speak to faculty at Lake Michigan College. This past week, I went back to the same neck of the woods to deliver two hands-on sessions on prompting and AI usage to faculty at Southwestern Michigan College. I wrapped up my trip with a brief session with a pedagogy seminar for graduate students at Notre Dame.

(If you’d like to book me for this spring, or discuss options, click here.)

Here are 5 questions I got from the faculty I spoke with. I figure you might find my answers useful…

1. “What if some of my students can afford premium AI tools and others cannot? This is a serious equity issue!”

It’s a legitimate concern and there isn’t a silver bullet. Here are some thoughts.

While there are pricey premium tools like o1 pro (available only via the $200/month ChatGPT Pro plan), almost all tools either have comparable free versions or cost $20/month (e.g., ChatGPT Plus, Gemini Advanced, Claude Pro).

The bright side is that the free tools are amazing — better than premium tools from ~6 months ago — and aren’t significantly worse than the paid ones for most tasks. 

This means students’ opportunities and access have increased in absolute terms — that is, in terms of goods they can accrue regardless of how much their peers accrue.

The dark side is that for some key tasks relevant in the university setting, the difference is significant: o1 pro is far better than GPT-4o at anything complex and, crucially, requires less prompting skill.

This means that students’ opportunities and access have decreased in relative terms, at least if they can’t afford what their peers can.

(Imagine, to make it stark, a course graded on a curve where only perfect-scoring students can get As, where o1 pro access guarantees perfect outputs on the assessments but GPT-4o falls short.)

One solution: instructors should try to leverage AI tools in their free versions and create assignments/assessments that don’t reward students who have the means to use the premium tools. This requires regular testing of the main tools’ capabilities, but also course design with key AI-free moments — and those need to be the moments where everyone can shine, including with respect to their grades.

This is the same type of solution we deploy to mitigate other sources of unjust inequalities.

Here's an idea: […] the teacher can use the [most advanced] AI tool to generate a complete solution to "the problem" — whatever that is — and demonstrate how to do that in class. Give all the students access to the document with the results.

And then grade the students on a comprehensive follow-up activity / presentation of executing that solution (no notes, no more than 10 words on a slide). So the students all have access to the same deep AI result, but have to show they comprehend and can iterate on that result.

I think this is a good idea. It’s just like if you were to “share” access to an expert — access that not everyone has — and discuss/analyze what they told you with your colleagues or peers.

But everything is easier if your institution provides a high-quality baseline AI tool to all students. More on that below… 

2. “What’s the highest impact way to deploy an AI chatbot in my classes? The easiest?”

The best bang for your buck is a custom GPT. 

In essence, custom GPTs are modular or prepackaged AI chatbots that you can share via hyperlink. You do the modularization or packaging; you add meta-prompts or “instructions” and a knowledge base of files to enable them to simulate you (e.g. for tutoring or syllabus knowledge), your teaching assistants (e.g. for grading), or your students (e.g. for peer review or group work).
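To make the “instructions” part concrete, here is a minimal sketch of the kind of text you might paste into a custom GPT’s Instructions field to have it act as a course tutor. The course name, persona, and rules below are illustrative placeholders for you to adapt, not a ready-made template:

    You are a patient tutor for PHIL 101: Introduction to Ethics.
    • Start every conversation by asking what the student is working on and what they already understand.
    • Use only the attached syllabus and lecture notes to answer questions about course policies and readings; if the answer is not in those files, say so.
    • Guide with questions and hints; never write a full essay or homework answer for the student.
    • Keep replies under roughly 200 words and end each one with a follow-up question.

Pair instructions like these with a knowledge base of files (your syllabus, notes, or rubrics) and the chatbot is much more likely to stay grounded in your course rather than offering generic advice.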

You spend $20/month for ChatGPT Plus (which is required to build them), you build the custom GPTs, and your students pay nothing to use them. (On the free plan, your students do face usage limits; these can be removed if they pay $20/month.)

You can read more about custom GPTs in my Premium Tutorial dedicated to them.

If you want to learn how to build better custom GPTs, I have created two resources for you beyond the Tutorial:

  1. Our new $25 course

    • Early feedback has been very positive, with one learning specialist saying that the price is “bargain basement” for what you get and with a professor describing it as “interesting,” “useful,” and “helpful.”

  2. My free-for-you webinar on Friday, January 31, at noon Eastern on Zoom

And if you purchase the webinar, you get a discount on the course as a combo deal.

(For more information on the webinar, see my email from yesterday, if you haven’t registered already.)

3. “Why isn’t it feasible for me to change nothing about my courses, which are ‘old school’ in structure and content (device-free, in-class pencil-only tests, lecture-heavy), despite the power of AI?”

It isn’t feasible and here’s why:

Imagine you are teaching your “old school” course and all of a sudden every student correctly expects to live their life with immediate access to high-IQ tutors, mentors, and consultants who know a whole lot about nearly everything, including your field.

Imagine, too, that within a few years, these “assistants” will be 10x more powerful and integrated with nearly all aspects of your students’ lives, careers, and environments.

That is, outside of a few contrived contexts where they can’t query, direct, and rely on these “people,” your students will be able to leverage them at very low cost and will be expected to do so by their managers, family members, friends, etc.

You wouldn’t — and shouldn’t — leave your course unchanged in such a scenario.

You are in such a scenario, with AI.

So, change what you need to change.

(One compact case for thinking you are in such a scenario is as follows. The capabilities we are seeing or expecting from the latest models, whether o1 pro, o3 (soon to be released publicly), or Gemini 2.0, are genuinely impressive and go beyond the capabilities of most humans on most knowledge tasks. Personally, I’m finding o1 pro to be incredibly powerful in a range of use cases, and I am not alone — I’ll share other professors’ takes on its abilities in the coming months. Further, many of the past obstacles to AI success are being overcome, from context window limitations to hallucinations. Finally, the AI developers from OpenAI to Google are only gaining speed and resources, and they are repeatedly signaling that we are much closer to AGI and superintelligence than they expected to be at this point.)

4. “Which AI tool is best for <task X> in <my field>?”

You, the expert, need to find out for yourself by experimenting. I don’t know what high-quality outputs look like in your field, so I can’t tell you.

I can give you some initial guidance on where to look, though.

In descending order (internal to each category)…

For reasoning-heavy fields: 

For research-heavy fields:

For writing-heavy fields:

For coding-heavy fields:

With most of these tools, you’ll need to be able to prompt them effectively to get expert-level results. For that, I recommend the various ✨Premium Tutorials in the Archive dedicated to the topic (e.g., on prompting Gemini and long context more generally, or on prompting ChatGPT-4o for image and data analysis), or you can wait until my next prompting webinar (later this spring).

(Yes, you really need to dive in and get your hands dirty with the tools, prompting them for — minimally — several dozen hours over the coming semester to better understand what they can do. To be completely frank, if at this point you don’t have significant experience testing the best AI tools for the tasks that play central roles in your field and life, you’re behind and need to catch up.)

✨5. “Which large language model (LLM) should my institution purchase for all students to use?”

Note: The rest of this section is visible only to ✨Premium subscribers. Thanks for your support!

2. ICYMI (“in case you missed it”): Google updated NotebookLM with three major changes: a new three-panel interface for better content management, interactive voice conversations with AI hosts during Audio Overviews, and a premium "Plus" subscription for power users that offers 5x higher usage limits. The Plus version will be included in Google One AI Premium in early 2025.

3. A new analysis in Lawfare argues that OpenAI's o3 model, which reportedly scored 87.5% on the ARC-AGI benchmark (above human performance at 85%), represents a fundamental shift in AI development. The authors say this unexpected breakthrough — jumping from 5% to 87.5% in months — proves powerful AGI requires immediate regulatory action. They call for proactive governance frameworks, global coordination, and economic preparation for widespread AI transformation.

4. Microsoft is rolling out a set of new AI agents across Microsoft 365:

  • A SharePoint agent that provides instant answers based on site content

  • A "Facilitator" agent for Teams that takes real-time meeting notes and summarizes group chats

  • An "Interpreter" agent coming in 2025 that provides real-time speech-to-speech translation in 9 languages

  • A "Project Manager" agent in Planner that can create and execute project plans

The agents require a Microsoft 365 Copilot license and are rolling out in preview to customers.

What'd you think of today's newsletter?


Graham

Let's transform learning together.

If you would like to consult with me or have me present to your team, discussing options is the first step:

Feel free to connect on LinkedIn, too!