AI Literacy and "Technical" Knowledge

Plus, my upcoming webinar on using AI for feedback.

[image created with DALL·E 3 via ChatGPT Plus]

Welcome to AutomatED: the newsletter on how to teach better with tech.

In each edition, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In this week’s piece, I consider the relationship between knowing how to use AI effectively and having technical knowledge. I also remind readers that sign-ups are open for my December 6 webinar on using AI for feedback on student work.

💡 Idea of the Week:
What is AI Literacy?

I’ve been thinking a lot lately about AI literacy.

Here's a paradox of AI prompting I’ve been debating with myself: while it seems you'd need deep technical and field-specific knowledge to use AI effectively for challenging and complex tasks, sometimes you need none of it if you have more general skills.

With sophisticated enough prompting skills, general problem-solving abilities, and broad domain understanding, you can use AI to acquire the technical knowledge you need along the way.

And this includes technical knowledge of how AI works; often, you don’t need much of it to prompt AI effectively. (However, I will note that a lack of it can be very costly in some specific contexts, and sometimes you don’t know you’re in such a context if you lack this knowledge — yikes!)

In the past, I have argued that you almost always need pre-existing field-specific knowledge and skills to know when AI’s outputs are high quality. (See here for a free piece on this topic, and see here for a ✨Premium deep dive on it.)

I’m not saying I’ve completely changed my mind, but I’m less confident than I was.

The contours of this paradox have become clearer to me over the past few months as I have been developing an AI-powered department course scheduler that takes unstructured faculty, course, and room data as input and generates an optimal schedule from it.

I began with little knowledge of Python, linear programming, or optimization libraries that combine the two to create optimal faculty-to-course-to-room assignments. Although I have software development and IT experience, I’m a philosopher by training — analysis, synthesis, logic, and ethics are my strengths — not a computer scientist or mathematician.

What I do have are three crucial things:

  1. An advanced understanding of how to prompt LLMs and how to connect their inputs/outputs with information and other software

  2. A solid grasp of the problem space: the real-world constraints departments face in scheduling, the outcomes they desire, and — given user-submitted sample data (thanks, guys!) — the spreadsheets and data formats they have historically used to schedule manually

  3. A strong foundation in logical and conceptual thinking that enables me to map out the problem and solution space, addressing difficulties along the way

For instance, I could see that any viable solution would need to progress through several stages: raw data extraction, sorting, standardization, and only then optimization.

I could see that there would need to be meta-level error checking at various points, combined with looping to prior points in the workflow as needed.
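To make this concrete, here is a minimal Python sketch of the staged, self-checking workflow I have in mind. Everything in it (the two stages, the validation rule, the retry cap) is an illustrative placeholder, not the scheduler's actual code:

```python
# Illustrative placeholders: the real stages parse departmental
# spreadsheets and narrative data, not toy strings.

def extract(data):
    # Stage 1: pull structured records out of raw, messy input.
    return [line.strip() for line in data if line.strip()]

def standardize(data):
    # Stage 2: normalize formats so the optimizer sees uniform records.
    return [record.upper() for record in data]

def validate(data):
    # Meta-level error check run after each stage; returns problems found.
    return [r for r in data if "?" in r]  # placeholder rule

def run_pipeline(raw, max_retries=3):
    data = raw
    for stage in (extract, standardize):
        for _ in range(max_retries):
            candidate = stage(data)
            if not validate(candidate):  # check passed: advance a stage
                data = candidate
                break
            # Otherwise loop back and retry this stage; in the real
            # workflow, the retry carries error feedback to the AI.
        else:
            raise RuntimeError(f"{stage.__name__} kept failing validation")
    return data  # only now does the data reach the optimization step

print(run_pipeline(["  phil 101: smith  ", "", "phil 220: jones"]))
```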

I knew that optimization algorithms become very costly to run at certain orders of complexity, so the hard work of trimming the search space and reducing constraints before optimization would pay off later.
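For a feel of what that looks like in code, here is a hedged sketch using PuLP, an open-source Python optimization library of the kind I mean. The department data and constraints are invented; the point is the pruning step, which keeps infeasible faculty-to-course pairs from ever becoming solver variables:

```python
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

# Invented toy data: the real inputs are departmental spreadsheets.
qualified = {
    "Smith": {"PHIL101", "PHIL220"},
    "Jones": {"PHIL220", "PHIL330"},
}
courses = ["PHIL101", "PHIL220", "PHIL330"]

# Pruning step: only qualified pairs become decision variables. Without
# it, the model holds len(qualified) * len(courses) binary variables,
# a product that explodes on real department data.
pairs = [(f, c) for f, quals in qualified.items() for c in courses if c in quals]
x = {p: LpVariable(f"assign_{p[0]}_{p[1]}", cat=LpBinary) for p in pairs}

model = LpProblem("toy_scheduler", LpMaximize)
model += lpSum(x.values())  # objective: cover as many courses as possible
for c in courses:           # each course gets at most one instructor
    model += lpSum(x[p] for p in pairs if p[1] == c) <= 1
for f in qualified:         # each instructor teaches at most two courses
    model += lpSum(x[p] for p in pairs if p[0] == f) <= 2

model.solve()
print([p for p in pairs if x[p].value() == 1])
```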

This logical scaffolding has helped me guide the AI toward teaching me the technical skills I needed, precisely when I needed them. My effectiveness hasn’t come from a wealth of pre-existing computer science knowledge. Instead, it has come from knowing how to break down complex prompts, maintain context across long exchanges, and iteratively refine outputs: techniques I've developed and shared in my free newsletter and in my ✨Premium Tutorials on advanced prompting.

When I needed to learn constraint programming, I didn't take a course; I engaged in focused dialogues with AI, using my understanding of both the scheduling problem's logic and LLM capabilities to acquire specific technical knowledge in context.

But here's a powerful counterargument: perhaps what I'm calling "advanced prompting skills" is itself a form of programming literacy — a kind of meta-technical knowledge that's just as demanding as traditional computer science expertise.

After all, the ability to break down complex problems, maintain structured dialogues, and guide AI through sophisticated technical tasks requires many of the same cognitive skills as software development.

Could it be that I'm not really bypassing the need for technical knowledge, but rather shifting it to a higher level of abstraction?

This tension raises questions about the future of AI literacy and expertise, and I am not confident about the answers.

If I'm right, we should focus on teaching students and faculty how to be sophisticated AI communicators and logical problem decomposers, rather than loading them up with traditional technical knowledge.

But if my objector is right, we need to recognize that advanced prompting is itself a technical skill — perhaps even a new kind of programming paradigm — and adjust our curriculum accordingly.

Either way, one thing is clear: the traditional boundaries between "technical" and "non-technical" expertise are blurring, and our approach to education must evolve to reflect this new reality.

Perhaps the best analogy is to effective technical team management, a topic I covered months ago when I first thought more deeply about the parallels. A skilled technical manager doesn't need to know every programming language their team uses, but they do need to understand system architecture, problem decomposition, and how to communicate effectively with specialists. They rely on team members for implementation details while maintaining oversight of the larger solution space.

My interaction with AI on the course scheduler mirrors this dynamic — I provided the high-level direction and problem understanding, while the AI filled in technical details about Python syntax and optimization algorithms.

The key difference is that instead of managing human programmers, I'm "managing" AI systems through advanced prompting techniques.

This comparison suggests a third way forward: maybe the distinction between my view and my objector's is less important than understanding how AI is transforming the nature of technical work itself.

Just as the rise of high-level programming languages made software development more accessible without eliminating the need for deep technical knowledge, advanced AI prompting might be creating a new layer of abstraction in technical work — one where success depends on a hybrid skill set combining logical thinking, domain expertise, and sophisticated communication with AI systems.

The question for educators then becomes not whether to teach traditional technical skills or advanced prompting, but how to prepare students for a world where both approaches coexist and complement each other.

But let me know what you think via the poll below (note: I will include anonymized responses in my next newsletter).

What does AI literacy require?


📝✅ December 6th Webinar
on Feedback & Assessment

Given the positive response to our recent webinars, I will be hosting one final webinar for the year.

On October 4th, I hosted a webinar on how to use LLMs like ChatGPT as a professor. Just like my September 6th webinar on how to train your students to use AI, the feedback was very positive, with 100% of responding participants giving it an A (“Excellent!”) afterwards.

Here’s some of the feedback I received:

“Graham opened my eyes to how [to] create better prompts utilizing his approach.”

“A well organized approach to an intimidating topic. I especially appreciated the depths of the prompting explanations. They brought a new level of understanding to the challenges of 'talking' to A.I. ”

Attendees at my Oct. 4th webinar

Let’s keep the party rolling!

You can check out the dedicated webinar webpage for more detail or sign up directly below, but here are the highlights.

Dates and Numbers

  • Date and Time: Friday, December 6th from 12pm to 1:30pm EST

  • Standard Price: $150

  • Early Registration Price: $75

  • Premium Subscribers’ Price: $60 (discount visible on webinar page for logged-in users)

  • Early Registration Deadline: Monday, December 2nd at 11:59pm

  • Total Available Seats: 50

  • Minimum Participation: 20 registrations by Monday, December 2nd; if we do not reach 20 registrations by this date, all early registrations will be fully refunded and the webinar will be canceled/rescheduled

  • Money-Back Guarantee: You can get a full refund up to 30 days after the webinar’s date, for any reason whatsoever

What To Expect

  1. Live 90-Minute Interactive Webinar on Zoom:

    • Framework for evaluating when and how to use AI in feedback

    • Live demonstrations of feedback generation with ChatGPT, custom GPTs, and Claude

    • Practical strategies for maintaining student data privacy

    • Concrete examples of using LLMs as mentors, student simulations, and teaching assistants

    • Extended ethics discussion and Q&A session (with me, Dr. Graham Clay)

  2. High-Value Post-Webinar Resources (detailed on the dedicated webinar webpage)

🔗 Links of the Week

1. ICYMI: Google has launched a new "Prompting Essentials" course to teach effective AI prompting in five steps, building on its AI Essentials course (supposedly Coursera's most popular AI course globally).

2. A new study found that AI-generated poetry is not only indistinguishable from human-written poetry, but is actually rated more favorably by readers. Participants performed below chance (46.6% accuracy) in identifying AI vs human-written poems, and were more likely to attribute AI-generated poems to humans. The researchers suggest this is because AI poems are more straightforward and accessible, making them easier for non-expert readers to understand and appreciate compared to more complex human poetry.

3. Michael Muthukrishna argues that large language models may solve three fundamental problems in behavioral science: poor theory building, unreliable measurement, and limited generalizability of findings. He suggests AI models could enable rapid testing of social science theories through "synthetic populations" before costly real-world trials — potentially revolutionizing how we validate teaching interventions.

4. OpenAI published a guide for students on writing with ChatGPT, emphasizing ways to use AI that enhance rather than shortcut learning. Key suggestions include using AI for citation formatting, getting background on new topics, channeling historical figures for deeper understanding (a topic I’ve covered several times before), and improving flow through iterative feedback. They recommend students generate shareable links to their ChatGPT conversations and include them in bibliographies — this is the most minimal thing I recommended in my more comprehensive reflection on how to cite AI from July.

5. A new randomized trial found that having access to GPT-4 did not improve physicians' diagnostic reasoning compared to using conventional resources like UpToDate. Yet, surprisingly, GPT-4 alone outperformed both physician groups (scoring 92% vs ~75%). The authors suggest effective AI integration requires better training in prompt engineering and workflow design rather than just providing access.

🧪 Call For Interest:
AI Course Scheduler for Departments

As I reported two months ago, I am finalizing the beta version of an AI department/school/unit course scheduler.

It accepts as inputs the spreadsheets and narrative data departments currently use for faculty teaching assignments: course lists, faculty preferences, room details, etc. It then interprets these inputs approximately like a human scheduler would.

A standardized input form will be available but optional; I am assuming that most (human) department course schedulers don’t want to wrangle their faculty into using it and would rather dump in whatever data they already gather for manual scheduling.

The system outputs faculty-to-course assignments, can suggest course adjustments within given constraints, and has the option to add a layer for course-to-room assignments.
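Here is a simplified sketch of the kind of structured records the system normalizes everything into before optimizing (simplified for illustration; the real records carry many more fields, like meeting patterns and seat caps):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FacultyRecord:
    # One standardized row distilled from whatever a department submits.
    name: str
    qualified_courses: list[str]
    preferences: list[str] = field(default_factory=list)  # e.g. ["MWF 9-10"]
    max_load: int = 2

@dataclass
class Assignment:
    # One line of the scheduler's output.
    faculty: str
    course: str
    room: Optional[str] = None  # filled only if the room layer is enabled

smith = FacultyRecord("Smith", ["PHIL101", "PHIL220"], ["MWF 9-10"])
print(Assignment(faculty=smith.name, course="PHIL101"))
```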

If you’re interested in this AI tool, email me either to be notified when it's available (at a competitive price point) or to participate in beta testing at a discount.

Email me by responding to this email or by clicking the button below:

And if you don’t handle scheduling for your department/school/unit but know who does, tell them about this to help them out!

What'd you think of today's newsletter?


Graham

Let's transform learning together.

If you would like to consult with me or have me present to your team, discussing options is the first step:

Feel free to connect on LinkedIn, too!