AI Integration is Like Hiring

Also, a webinar on AI automations this Friday.

[image created with GPT-4o Image Generator]

Welcome to AutomatED: the newsletter on how to teach better with tech.

In each edition, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

Today, I take on another big topic: how to conceptualize the process of adding AI to your workflow, department/team, or institution/organization.

I also remind you about my webinar this coming Friday, which covers Zapier.

📝 This Friday: 
Creating AI Automations with Zapier

This Friday, May 2, from 12 until 1:30pm Eastern, I’m going to cover how to use Zapier (and similar apps) to improve your AI-powered workflows and processes.

If you don’t know, Zapier is a no-code/low-code automation tool that lets you connect two or more apps (from ChatGPT’s o3 model to Gmail to Microsoft Excel, plus 5,000+ more) in workflows driven by triggers, schedules, and so on.

In short, it’s an easy way to get data and software, including AI, to talk to each other in a way that saves you time and headaches.

I would argue that Zapier — or another tool like it, such as Make or n8n — is one of the most important tools to learn in the AI era, on a par with LLMs like ChatGPT themselves. I save tons of time with it every week, and it was key to running AutomatED while teaching full-time.

As part of this webinar, I will convince you that you should use it (trust me, this won’t take long), before pivoting to demonstrate three simple ways to use it that can change your life as an educator:

  1. Responding to students’ queries and needs in a dynamic and timely way

  2. Meeting management — before, during, and after

  3. Grading and feedback — never do rote work again, and instead focus on what makes you special as an evaluator

These three demonstrations will show you general strategies for using Zapier that you can apply to whatever you’d like to automate.

All registrants will receive the recording afterwards, even if they don’t attend live.

Webinar Details:

  • Cost: $25 ($0 for ✨Premium subscribers, like all webinars this year; discount code here, if you’re logged in)

  • Format: ~70% presentation, ~30% discussion (come ready with questions and comments)

🔗 Recent AI News

1. Anthropic launched Claude for Education, their education-focused AI offering (similar to Google's LearnLM) featuring "Learning mode" that uses Socratic questioning rather than providing answers directly. They've established partnerships with Northeastern, LSE, and Champlain College for campus-wide access, and are collaborating with Instructure to integrate with Canvas LMS. The initiative includes a Campus Ambassadors program and API credits for student projects.

2. Relatedly, Anthropic released a study on how university students use Claude, analyzing one million conversations from accounts with higher-education email addresses. Key findings: STEM students, especially Computer Science students (36.8% of conversations, despite CS representing only 5.4% of U.S. degrees), are early adopters, while Business, Health, and Humanities show lower adoption rates. Most concerning for educators: Claude primarily completes higher-order cognitive functions like Creating (39.8%) and Analyzing (30.2%) from Bloom's Taxonomy — suggesting students may be outsourcing exactly the cognitive skills we most want them to develop.

3. OpenAI released o3 and o4-mini, their "smartest and most capable models to date," which can now use all ChatGPT tools (web search, file analysis with Python, image understanding, and image generation) while thinking through problems. Unlike standard models, these "reasoning models" are trained to think longer before responding, resulting in significant performance improvements in coding, math, science, and visual tasks. I have been especially impressed by o3 in my testing so far (it’s the model that has been powering Deep Research; previously, this was the only way to use it).

4. Anthropic launched "Research", a new capability that transforms how Claude finds and processes information. Unlike typical search, Claude conducts multiple searches that build on each other, determining what to investigate next while systematically exploring different angles. This complements their new Google Workspace integration, which connects Claude to your Gmail, Calendar, and Google Docs — enabling it to search emails, review documents, and consider calendar commitments when answering questions. For educators and students, this means Claude can now analyze learning materials, scan notes from past courses, search the latest academic research, and create personalized study plans. Available in early beta for Max, Team, and Enterprise plans in the US, Japan, and Brazil.

5. A fascinating new field experiment from Harvard with 776 Procter & Gamble professionals found that AI fundamentally transforms teamwork dynamics. The study revealed three key insights: (1) individuals with AI matched the performance of human teams without AI; (2) AI eliminated functional silos between R&D and Commercial professionals, helping both groups produce balanced solutions regardless of background expertise; and (3) AI's interface prompted positive emotional responses, partially fulfilling the social role traditionally provided by human teammates.

💡 Idea of the Week:
Reconceptualize AI’s Role

The Friday before last, I found myself in St. Petersburg, giving two presentations/workshops at the University of South Florida. The first session included local businesspeople and community leaders. What always strikes me about this sort of interdisciplinary session is what people — very different from me in expertise and background — find interesting or useful.

One topic in particular came up with several businesspeople afterwards: my suggestion that we need to stop treating AI tools like software purchases and start approaching them as new team members we're bringing on board.

Having spent the last year helping institutions, professors, instructional designers, and companies navigate their AI adoption journeys — some gracefully, others with considerable struggle — I've become convinced this mental shift is critical.

Let me walk you through why this matters and how it might transform your approach to AI integration, whether as an individual or as a member of a bigger organization.

Traditional Approach: AI as “Software”

Most of us have been part of a software acquisition decision-making process at some point. We create feature comparison spreadsheets, check for LMS compatibility, verify FERPA compliance, and wade through licensing terms. Or, as individuals, we Google options, skim user reviews, and seek expert-evaluated comparisons. (I think I — “cutting-edge AI guy” — just dated myself a little bit. Oops!)

When ChatGPT burst onto the scene, many institutions and individuals defaulted to these same evaluation processes.

This approach isn't entirely wrong — Claude, ChatGPT, and Gemini are, after all, software in some sense. And it made a lot of sense early in AI’s development, like up until 2023, perhaps.

But watching higher educators of many stripes struggle with implementation, especially as AI has become more intelligent and more tool-connected over these past two years (see above for two examples from Claude and ChatGPT), makes it clear that something crucial is missing from this framework.

Better Approach: AI as “New Team Member”

Here's the perspective that resonated with those USF attendees:

Today's AI systems — whether Deep Research, Operator, or more advanced setups like Google’s Co-Scientist — aren't just executing pre-programmed routines like traditional software. They're dynamically adapting to your specific needs/data, learning from feedback, and operating with increasing autonomy, much like a new colleague would.

Consider how differently we approach these scenarios:

Software acquisition: "Does this product have all the features we need right now? What's the annual cost? Is it bundled with something we are already buying? Who handles maintenance?"

Team member acquisition: "How quickly can they learn our processes? What information do they need to succeed? Will they work well with our existing team? How much supervision will they need? Do their working methods align with our department's approach?"

This second set of questions gets much closer to what actually determines AI implementation success in higher ed settings in mid-2025 and going forward.

It’s a version of the approach I first outlined from a different angle a little over a year ago in my newsletter titled “Will Students Need Management Skills as AI Develops?”

What This Means for Different Roles in Higher Ed

For Department Chairs: When faculty propose developing a department-wide AI tool for assignment feedback, don't just ask about cost and security. Ask: "Who will 'mentor' this AI on our department's assessment approach? How will we ensure its work aligns with our program learning outcomes? In what ways should our process be modeled on how we train teaching assistants and adjuncts?"

Just as you wouldn't hire an adjunct or hand a graduate student 100 papers to grade and immediately turn them loose without orientation, training, or some sort of verification process, you shouldn't deploy AI without similar onboarding, whether via native functionality, tool integration, or prompt engineering.

For Instructional Designers: You're often bridging faculty needs with technological capabilities. Instead of just showing professors how to use AI features, help them develop "training protocols" — collections of exemplars, rubrics, and style guides that help the AI understand their specific teaching approach — or use the latest AI tools that do this for them.

For Individual Faculty: Think about the "working relationship" you want with AI tools. Do you want ChatGPT to be your research assistant, teaching assistant, or content developer? Each role requires different "training" approaches, tool needs, and supervision levels. What would you tell a generalist human — smart but not knowledgeable about your domain or tasks — to get them up to speed? How would you check behind them? When would you decide to let them operate without oversight, or with less?
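To make the "training protocol" idea concrete, here is a minimal sketch of how exemplars, a rubric, and style notes might be assembled into a reusable system prompt that "onboards" an AI feedback assistant. All function names, rubric entries, and example text below are hypothetical illustrations, not a prescribed method:

```python
def build_feedback_prompt(rubric: dict[str, str],
                          exemplars: list[tuple[str, str]],
                          style_notes: str) -> str:
    """Assemble a system prompt that 'onboards' an AI feedback assistant.

    rubric: criterion name -> description of what strong work looks like
    exemplars: (student excerpt, feedback actually given) pairs
    style_notes: the instructor's voice and conventions
    """
    lines = ["You are a feedback assistant for this course.",
             "",
             "Evaluate student work against this rubric:"]
    for criterion, description in rubric.items():
        lines.append(f"- {criterion}: {description}")
    lines += ["", "Match the tone and depth of these past examples:"]
    for excerpt, feedback in exemplars:
        lines.append(f'Student wrote: "{excerpt}"')
        lines.append(f"Feedback given: {feedback}")
    lines += ["", f"Style notes: {style_notes}"]
    return "\n".join(lines)

# Hypothetical course materials
rubric = {"Thesis": "States a clear, arguable claim in the opening paragraph",
          "Evidence": "Supports each point with cited sources"}
exemplars = [("Plato argues that...",
              "Strong start; now connect this back to your thesis.")]
prompt = build_feedback_prompt(rubric, exemplars,
                               "Encouraging, specific, no generic praise.")
print(prompt)
```

The prompt this produces could then be pasted into a custom GPT, a Claude Project, or a Zapier step, and updated each term the way you'd update TA training materials.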

Upskilling Ourselves as "AI Managers"

This new paradigm requires us to develop skills more akin to personnel management than technical administration, as I predicted in my piece a year ago:

  • Effective onboarding: Creating clear documentation and examples that teach AI systems about your specific context and expectations, or integrating them with a team or environment that trains them on-the-go (this will become more and more common as AI has native access to our data).

  • Appropriate delegation: Identifying which tasks benefit from AI augmentation and which require human touch.

  • Ongoing supervision: Establishing review procedures that ensure AI outputs maintain quality and alignment with goals and values.

  • Performance evaluation: Developing metrics beyond technical functionality — Is the AI actually helping students learn? Is it reducing faculty workload or just shifting it?
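As one illustration of "ongoing supervision," here is a hedged sketch of an automated spot-check that escalates AI-written feedback for human review when it is suspiciously short or never mentions the rubric criteria. The function name, thresholds, and criteria are my own assumptions, not a standard:

```python
def needs_human_review(feedback: str, rubric_criteria: list[str],
                       min_words: int = 40) -> list[str]:
    """Return the reasons a piece of AI-written feedback should be escalated.

    An empty list means the feedback passes this (deliberately simple) check.
    """
    reasons = []
    if len(feedback.split()) < min_words:
        reasons.append("feedback is shorter than the minimum word count")
    # Criteria that the feedback never mentions, case-insensitively
    missing = [c for c in rubric_criteria if c.lower() not in feedback.lower()]
    if missing:
        reasons.append(f"rubric criteria never mentioned: {', '.join(missing)}")
    return reasons

flags = needs_human_review("Nice essay. The thesis is clear.",
                           ["thesis", "evidence"])
# Too short and never mentions 'evidence', so both reasons are returned
```

A check like this could run as a Zapier step between the AI draft and the student-facing message, routing flagged drafts to the instructor instead of sending them automatically.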

Some of these skills aren't typically covered in standard faculty development workshops but are becoming essential as AI becomes more deeply integrated into our work.

I believe this perspective shift from "software" to "team member" represents more than just an implementation strategy — it's a fundamental reorientation that will determine which individuals, departments, and institutions successfully navigate the AI transition.

In the coming years, we'll see increasing autonomy in AI systems. Those treating AI as merely another software purchase will find themselves with powerful tools that nonetheless fail to integrate effectively into their teaching and research ecosystems. Meanwhile, institutions that have developed robust "AI onboarding" and "training" protocols will leverage these systems to enhance student support, accelerate research, and create more personalized learning environments.

Put simply, the most forward-thinking universities and organizations aren't asking "Which AI tools should we buy?" They're asking "How do we build effective working relationships with increasingly capable AI systems?" and “Who — not what — would I want in my work ecosystem, if I could have them on demand?”

What'd you think of today's newsletter?


Graham

Let's transform learning together.

If you would like to consult with me or have me present to your team, discussing options is the first step.

Feel free to connect on LinkedIn, too!