6% of Faculty Feel Supported on AI?!
Plus, a webinar on building AI tutors this Friday.
[image created with Dall-E 3 via ChatGPT Plus]
Welcome to AutomatED: the newsletter on how to teach better with tech.
In each edition, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
Last Week: I shared some of the good questions faculty have asked me lately when I visited their campuses for AI pedagogy/productivity presentations, as well as what I said in response. The questions ranged from “What if some of my students can afford premium AI tools and others cannot?” to “Which large language model (LLM) should my institution purchase for all students to use?” Click here to read it if you missed it.
Today, I discuss some striking findings from the Digital Education Council’s just-released Global AI Faculty Survey. If you have any influence over your institution’s AI literacy trainings and resources, you’ll want to pay attention to this…
Remember, too, that my first AutomatED webinar of 2025 is this Friday from 12-1:30pm Eastern on Zoom. I'll dive deep into creating AI tutors using various approaches including custom GPTs, Google Gems, LearnLM, and APIs. You can register here for $25, which includes a discount code for our custom GPT course (free for ✨Premium subscribers; discount code here if logged in). While basic familiarity with prompting and custom GPTs is recommended (and available through the course), this webinar will focus on advanced implementation strategies and common pitfalls in AI tutoring. The session will be 65% presentation and 35% discussion, and the recording will be available for those who cannot attend live.
📬 From Our Partners:
An Assistant for Every Prof?!
Save 1 hour every day with Fyxer AI
Organizes emails so important ones are read first.
Drafts replies in your tone of voice.
Takes notes, writes summaries, drafts follow-up emails.
Remember: Advertisements like this one are not visible to ✨Premium subscribers. Sign up today to support my work (and enjoy $500+ worth of features and benefits)!
💭 What Do Faculty Think About AI?
The Digital Education Council just released their Global AI Faculty Survey of 1,681 faculty members from 52 institutions across 28 countries, and the findings are eye-opening. (Click here if you missed their analogous survey of students.)
While 86% of faculty see themselves using AI in their future teaching [p. 21], only 6% strongly agree that their institutions have provided sufficient resources to develop their AI literacy [p. 35].
This is a concerning gap between the recognized power of AI and institutional support, and it's a clear signal about where higher education needs to focus in 2025.
Speaking with faculty about AI around the world, I've seen this firsthand. But let's dig into the survey’s findings.
The Regional Picture
Before getting into individual issues, I want to highlight some interesting regional variations in broad stances on AI. For instance, the share of faculty viewing AI as an opportunity versus a challenge varies significantly by region [p. 13]:
Latin America: 78% opportunity / 22% challenge
Asia-Pacific: 70% / 30%
Europe/Middle East/Africa: 65% / 35%
USA & Canada: 57% / 43%
Likewise, interest in using AI in teaching in the future varied along parallel lines:
[Chart: Page 21 of the Digital Education Council Global AI Faculty Survey 2025]
Faculty Tool Use Outstrips Support
Setting aside regional variations, there's a noticeable gap between faculty who have tried AI tools and those who use them regularly and expertly:
61% of faculty have already used AI in teaching [p. 6]
75% of AI-using faculty use it to create teaching materials [p. 7] (see here and here for some of my content on this topic)
50% of AI-using faculty use it for teaching students to use and evaluate AI in class [p. 7] (see here for one of my pieces on that topic)
Yet 88% of these users report only "minimal to moderate" use [p. 8]
Meanwhile, 40% of faculty identify as complete beginners or report "no understanding" of AI, while only 17% consider themselves "advanced" or "expert" in AI proficiency [p. 16]
Why the gap? Well, one explanation is that faculty lack institutional support.
The survey reveals that…
80% of faculty don't find their institutional AI guidelines comprehensive [p. 32]
80% say their institutions haven't made clear how AI can be used in teaching [p. 33]
The top barrier to AI adoption, at 40%? "I don't have time or resources to explore AI" [p. 9]
The second-highest barrier, at 38%? “I am not sure how to use AI in my teaching” [p. 9]
Faculty’s Concerns About Students
Looking at what keeps faculty up at night when it comes to students using AI…
66% agree that incorporating AI is necessary to prepare students for future job markets [p. 22]
83% are concerned about students' ability to critically evaluate AI outputs [p. 29]
82% worry about students becoming too reliant on AI [p. 30]
54% believe current student evaluation methods need "significant changes" [p. 23]
50% say they'll need to redesign assignments to be more "AI resistant" [p. 24]
On the latter two topics, see my ✨Premium Guides on how to design assignments and assessments in the age of AI and how professors can discourage and prevent AI misuse.
The Tools & Resources Faculty Actually Want
Here's where it gets particularly interesting. The survey reveals what faculty believe will enable them to integrate AI effectively [p. 36]:
65% want better access to AI tools and resources. But it's not just about having the tools — it's about knowing how to use them effectively.
64% are calling for training in AI literacy and skills. This matches what I've been hearing in my webinars and faculty workshops — there's a hunger for practical, hands-on guidance.
60% want a collection of best practices and use cases.
50% need clear guidelines on AI in teaching.
31% want environments that encourage innovation and tolerate failure in AI use.
[Chart: Page 36 of the Digital Education Council Global AI Faculty Survey 2025]
Again, while 86% see themselves using AI in future teaching [p. 21], a mere 6% feel their institutions are fully supporting their AI literacy development [p. 35].
It's a gap that needs urgent attention.
What Do You Think?
The survey shows that, in general, faculty worldwide aren't resistant to AI — quite the opposite, especially beyond the USA and Canada. They're eager to embrace it but need proper support.
If you’re here reading AutomatED, you’re probably in the same boat. But this survey makes me wonder: how can I provide you the most valuable resources?
If you have a few minutes, please complete the brief 5-minute survey I've created to understand what you need most. It's been a while since I've run one of these, and I'd like more insight into what you, my subscribers, want to read or watch.
I will share the responses in aggregate form later on, so stay tuned for that.
📢 Quick Hits:
AI News and Links
1. A new podcast episode from Ed-Technical reveals details about Google's "LearnLM," a family of large language models specifically designed for education. These models are fine-tuned with pedagogical behaviors that can't be achieved through prompt engineering alone. The episode features Google DeepMind researchers explaining their multidisciplinary approach. (Click here for my ✨Premium Tutorial on LearnLM, in case you missed it this past weekend. The first use case is before the paywall.)
2. A New York Times report reveals how Chinese AI startup DeepSeek built a competitive AI model (DeepSeek-V3) using only 2,000 specialized Nvidia chips — far fewer than the 16,000+ typically used by U.S. companies like Google and OpenAI. (Next week, I will discuss reasoning models and the more ed-focused impacts of “agentic” applications of them.) The achievement supposedly cost just $6 million in computing power compared to Meta's hundreds of millions developing Llama and highlights China's growing influence in open-source AI development.
3. A VentureBeat analysis provides deeper context on the performance of the soon-to-be released o3 model (from OpenAI) on the ARC-AGI benchmark, which is intended to test an AI system’s ability to adapt to novel tasks and demonstrate fluid intelligence (hence ‘AGI’). While o3 achieved an unprecedented 87.5% score with high compute (75.7% under standard conditions), experts emphasize this doesn't equate to AGI. The article reveals the high computational cost — $17-20 per puzzle at standard compute, and 172 times more for high-compute performance.
4. Speaking of which, a new 3,000-question test developed by the Center for AI Safety and Scale AI — “Humanity’s Last Exam” — is intended to measure advanced AI capabilities. Questions were sourced from experts who were paid up to $5,000 per accepted submission. Current leading AI models performed poorly — OpenAI's o1 scored highest at just 8.3% — but researchers expect scores to surpass 50% by year's end.
5. A new study in BJET by Yizhou Fan et al. reveals concerning findings about AI use in education: while students using ChatGPT showed better immediate performance on writing tasks, they exhibited "metacognitive laziness" — less self-regulated learning and shallower engagement with the material. The researchers found no difference in motivation between students using ChatGPT and those using other tools, but the AI users were more likely to become dependent on it without developing transferable knowledge or skills.
What'd you think of today's newsletter?

Graham

Let's transform learning together. If you would like to consult with me or have me present to your team, discussing options is the first step. Feel free to connect on LinkedIn, too!