AI Can Do Anything at a Cost

Also, a webinar on AI grading and feedback this Friday.

[image created with GPT-4o Image Generator]

Welcome to AutomatED: the newsletter on how to teach better with tech.

In each edition, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

Today, I try to reframe how you think about AI’s role in general and in higher education in particular. The productive question is not what knowledge work AI can do, but rather how much it would cost to get it to do what needs to be done.

I also share the sign-up link for Friday’s $25 webinar on AI grading and feedback (a popular topic I present on to institutions around the world).

📝 Save the Date: Webinar This Friday

This Friday (April 11th, 12-1:30pm ET), I'm hosting a Zoom webinar about all the different ways you can save time and improve your grading/feedback with AI.

Here’s what I'll cover:

  • Options that fit your views on the ethics of using AI for these tasks, whatever those views are

  • AI-driven feedback methods that don’t share student work with AI

  • Ways of grading student work with AI that overcome AI weaknesses and don’t violate students’ data privacy rights

  • How I help specific, real professors save time — and avoid headaches — with AI feedback/grading workflows

Webinar Details:

  • Cost: $25 ($0 for ✨Premium subscribers, like all webinars this year; discount code here, if you’re logged in)

  • Format: ~70% presentation, ~30% discussion (come ready with questions and comments)


💡 Idea of the Week: Integration Effort

One of the most frequent misconceptions about AI in higher education is that AI models are incapable of truly advanced, specialized tasks. But in my experience — developing AI tutoring solutions for institutions, custom AI graders for individual professors, and sophisticated content with subject matter experts — there’s no knowledge work AI can’t at least assist with significantly.

The real sticking point is what I call “integration effort.”

Integration effort is essentially the work (or cost, in the broadest sense) required to prompt AI, give it the right context or data, connect it to the right software, and manage quality control so it can actually complete the task you have in mind.

For a simple filler assignment or routine email text, integration effort might be near-zero. But when you’re building a sophisticated AI tutor that needs specialized domain knowledge, a consistent “voice” over a whole course, and resistance to student jail-breaking, the integration effort can be substantial.

Below, I want to argue that while integration effort is currently a significant obstacle for some tasks, it’s decreasing rapidly and unlocking more and more tasks for AI each semester. And, crucially, I’m seeing that those who don’t invest in understanding, analyzing, and minimizing integration effort in their domains are lagging far behind.

AI Can Do (Virtually) Anything — If We Let It

Over the last year, I’ve worked with clients who were convinced that aspects of their work “just can’t be done by AI.”

One instructional designer I worked with believed that AI could never produce the nuance needed for an advanced philosophy of education module; then I showed them a multi-step prompting workflow using two models in succession (one for initial drafting, another for in-depth editing), plus a third pass for content validation. The result, produced in about 15 minutes for roughly $5 in API calls, rivaled what human designers typically take weeks to produce.
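
To make “multi-step prompting workflow” concrete, here is a minimal sketch of the pattern, not the exact setup from that engagement. The model names, prompts, and use of the OpenAI Python client are illustrative assumptions; the point is the draft, edit, validate chain.

```python
# Minimal sketch of a draft -> edit -> validate prompt chain.
# Model names and prompts are placeholders; swap in whatever models and criteria you use.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in your environment


def ask(model: str, system: str, user: str) -> str:
    """Send one prompt to one model and return the text of its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content


topic = "Module outline: epistemic injustice in classroom assessment"

# Step 1: a drafting model produces the first pass.
draft = ask("gpt-4o",
            "You are drafting a graduate-level philosophy of education module.",
            topic)

# Step 2: a second pass edits for nuance, rigor, and consistent voice.
edited = ask("gpt-4o",
             "Edit the following module draft for nuance, rigor, and consistent voice.",
             draft)

# Step 3: a validation pass checks the result against explicit criteria.
report = ask("gpt-4o",
             "Check this module against these criteria: clear learning objectives, "
             "accurate presentation of major positions, no unsupported claims. List any failures.",
             edited)

print(edited)
print("--- validation report ---")
print(report)
```

The scaffolding is the easy part; most of the integration effort goes into the prompts, the source material you feed each step, and the validation criteria.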

In my experience, as of 2025, the barrier is rarely AI’s inability. Instead, the biggest challenge is the integration effort required to set everything up:

  • designing custom prompts

  • weaving in relevant references/information

  • developing a multi-step “flow” that chains the prompts together and handles quality assurance/errors/etc.

  • and training the humans (faculty, staff, students) to create, use, and modify these processes effectively.

Practically any knowledge task can be done, but prompting and process design are skills in themselves. When folks assume AI simply “can’t do that,” they often haven’t (i) used the latest models or (ii) explored the flow-creation techniques that unlock the real power of these tools.

This leads me to my next point…

Determining Integration Effort is Key

Because integration effort can be the difference between massive success and total frustration, it’s critical to figure out — as early as possible — how much integration effort is needed for any valuable task.

This is especially true in higher ed, where time is at a premium and academic units often move at a glacial pace, as we’ve all seen.

If you’re a professor who wants to grade 200 midterm essays in a few hours with the help of AI, you need to know how much time you’ll have to spend on prompt calibration, flow creation, and the like to handle edge cases and catch the nuances you care about. Is that investment worth it, rather than simply committing to an AI-free method and grading them all over a week or two (like it’s 2009)?
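
For a sense of where that time goes, here is a rough sketch of the shape such a grading flow can take. The folder layout, rubric, and model choice are hypothetical placeholders, and any real setup has to respect students’ data privacy (e.g., anonymized submissions); the expensive part is calibrating the rubric prompt against essays you have already graded by hand and handling the edge cases it misses.

```python
# Rough sketch of a batch grading loop: rubric + essay in, structured feedback out.
# Paths, rubric, and model are hypothetical placeholders, not a finished workflow.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

RUBRIC = """Thesis clarity (0-5), use of evidence (0-5), organization (0-5).
Return a score and one sentence of feedback per criterion, plus a total."""


def grade(essay_text: str) -> str:
    """Ask the model to grade one essay strictly against the rubric."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; calibrate on a sample you have already graded by hand
        messages=[
            {"role": "system",
             "content": "You grade undergraduate midterm essays strictly against the rubric provided."},
            {"role": "user",
             "content": f"Rubric:\n{RUBRIC}\n\nEssay:\n{essay_text}"},
        ],
    )
    return response.choices[0].message.content


# Grade every anonymized essay in a folder and save the feedback for human review.
Path("feedback").mkdir(exist_ok=True)
for essay_file in sorted(Path("essays_anonymized").glob("*.txt")):
    feedback = grade(essay_file.read_text())
    (Path("feedback") / f"{essay_file.stem}.md").write_text(feedback)
```

The loop itself takes minutes to write; the calibration cycle (compare its scores to yours on a sample, adjust the rubric prompt, repeat) is where most of the hours go, and that is exactly the integration effort you need to estimate before committing.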

It’s easy to miss these hidden costs. Yet, your approach to integration effort can determine how quickly you see real gains in efficiency and quality. If you ignore it altogether, you’re making decisions in the dark about which tasks AI can (or cannot) take off your plate.

Some Tools Make Integration Easier

AI development companies are trying to reduce friction.

NotebookLM from Google, for instance, can dramatically cut the time it takes to onboard your documents and have an AI agent refer to them accurately. That means you can set it up to read your syllabus, lecture notes, or student questions, and produce relevant responses without too much engineering.

Similarly, specialized tools like OpenAI’s “Operator” and “Deep Research” are being designed to reduce the headache of context management and tool integration, so you can minimize how much prompt engineering and tool wiring you need to do yourself.

Yet, from my vantage point, so many use cases in higher ed still require tailor-made solutions. Maybe you need advanced data analysis features connected to your LMS or your scheduling apparatus, or a system that can handle your institution’s unique compliance requirements.

At that point, the built-in functionalities of a prepackaged AI product can’t cover it — bespoke solutions (and their associated integration effort) might pay off. But you won’t know until you begin scoping out the task itself, per the above.

Many Are Behind on Integration Effort (and Skills)

In my consulting work, I’m consistently surprised by how many teams (from faculty to staff and admin) are months or years behind the current capabilities and best practices of AI.

Picture a department that’s still debating whether AI can write good multiple-choice questions for basic 100-level content (it can, very easily, regardless of subject), while entire college systems are developing customizable AI tutors, structuring their unstructured advising data with AI, or using AI to turn out entire modules of HR training content in a day.

The main difference is that the latter group has recognized that success with AI depends on devoting time and resources to the integration effort itself.

Don’t let your institution stay in a secure bubble of “We’re just not sure AI is up to it yet.” With the advent of reasoning models and more powerful tools, the question is never “Can AI do the work?” so much as “What do we need in order to integrate AI effectively?” and “Is that worth it?”

These are the new baseline questions in the age of advanced AI.

Over Time, AI Itself Lowers Integration Effort

One of the most remarkable developments (even in the last six months) is how prompt adaptability and context management have become core features of new AI releases.

Just as we see with Deep Research, AI is beginning to handle tasks like context retrieval and data-pipeline setup on your behalf, gradually collapsing its own integration effort. We’re seeing the earliest versions of AI that can chain tasks together or fetch third-party data and tools on demand with minimal human guidance.

Eventually, many of the workflows we currently find “too involved” will become automated. That’s the horizon some call “frictionless integration,” where AI can do all of the background data wrangling, validation, and creative rearranging we currently manage ourselves.

At that point, the role of humans in the workplace will shift even more radically than it already is, and higher education won't be an exception. A system that can produce, cross-verify, and deliver tailored content in real time to thousands of students simultaneously will reorganize the entire educational infrastructure — how we staff, how we design curricula, and how we conceive of knowledge transmission itself.

It may feel futuristic, but it’s not as far away as you might think. And, even if you think I’m wrong about this, proving it would require you to join me in trying to figure out the relevant level of integration effort. That’s my point.

What'd you think of today's newsletter?


Graham

Let's transform learning together.

If you would like to consult with me or have me present to your team, discussing options is the first step:

Feel free to connect on LinkedIn, too!