AI vs Ivy: The Prestige Destroyer?

I reflect on AI's potential equalizing effect for undergraduate education. Also: last call for Friday's AI prompting webinar!

[image created with DALL·E 3 via ChatGPT Plus]

Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In this week’s piece, I reflect on what would happen to prestige-driven higher education institutions, were AI to continue to progress.

Remember that registration for my October 4th webinar, “How to Use LLMs like ChatGPT as a Professor,” closes this week. The event is on Friday! (A recording and other materials will be available to those who cannot make it in person.)

💭 Sharing My ChatGPT Prompts

Last week, I shared 10 ways I use AI as a teaching and researching professor, with a focus on LLMs like ChatGPT, Claude, and Gemini 1.5 Pro.

The post was one of our most popular this year. In fact, it went viral on LinkedIn (yay?).

This Friday, I host a webinar on the same topic — “How to Use LLMs like ChatGPT as a Professor” — where I will share some of the actual prompts I use for those 10 tasks.

But attendees will also leave the webinar with three even more valuable takeaways:

  1. A strong grasp of how to prompt LLMs in general, with a focus on long context (e.g. Gemini 1.5 Pro) and customizable AI tools (e.g. custom GPTs)

  2. An understanding of which LLMs to use for which tasks and how to integrate them in workflows

  3. Hands-on experience using LLMs for a task that they care about — with one-on-one feedback from me to solve any problems or iron out any bugs

There is a 60%-off discount for Premium subscribers, and there is a 50%-off post-webinar refund available for those who register this week and share a testimonial afterwards.

If you are interested, you can learn more here (or via the button below), and you can sign up here.

The webinar will be capped at 50 attendees, and there are only 23 spots left, so be sure to snag yours soon!

If you missed my piece from last week, you can read it here and see it in its viral glory on LinkedIn here.

1. OpenAI has announced the rollout of Advanced Voice to all ChatGPT Plus and Team users (except for those in the EU, the UK, Switzerland, Iceland, Norway, and Liechtenstein), with new features including Custom Instructions, Memory, five new voices, and improved accents. (Google’s Gemini Live has also been made more widely available, and Meta has unveiled Natural Voice Interactions, a new AI feature that enables users to have voice conversations with AI assistants across its apps, including Instagram, WhatsApp, Messenger, and Facebook.)

2. Glenda Morgan argues that current responsible AI guidelines for educational technology have a significant blind spot by failing to address the environmental impact of AI. She contends that this omission undermines the credibility of these guidelines and calls for EdTech vendors and organizations to seriously consider and transparently address AI's environmental costs in their ethical frameworks.

3. Tech journalist Timothy B. Lee offers an in-depth analysis of OpenAI's new o1 models, explaining their improved reasoning capabilities and the reinforcement learning techniques behind them. Through a series of tests and comparisons with previous AI systems, Lee demonstrates o1's strengths in areas like math and programming while also pointing out its limitations, particularly in spatial reasoning.

4. The University of California, Los Angeles (UCLA) has announced a partnership with OpenAI to integrate ChatGPT Enterprise across its academic, administrative, and research functions. This university-wide implementation will make a tailored version of ChatGPT available to UCLA's students, faculty, and staff, with plans to solicit project ideas for using AI to enhance student success, research efforts, and institutional efficiency later this year.

5. The University of Calgary has developed SMARTIE, a suite of AI-powered applications using GPT-4 to help educators create inclusive and comprehensive course outlines, incorporating features like EDIA-aware learning activities, rubric design, and experiential learning assessments. (See also AutomatED’s own Course Design Wizard GPT for an AI-focused analog.)

💡 Idea of the Week:
Profs on Every Smartphone?

After 10 years in academia, publishing and teaching at universities in the US and Europe, I can confirm: prestige is the coin of the realm. (And it becomes more and more valuable as one goes “up” the research university hierarchy.)

I won’t try to justify this old adage here. I will take it for granted that prestige is a primary driving force behind many higher education institutions’ activities, in no small part because it butters their bread. Prestige draws faculty, grant funding, students, donations, and more.

Instead, I am going “big picture” today to reflect on what would happen to prestige-driven institutions, were AI to continue to progress. More specifically, my interest is in the effect of such AI on their prestige in the eyes of potential undergraduates.

Suppose that AI reaches the point where it can complete research in all the fields covered by universities and, furthermore, can teach as well as the average professor.

In this scenario, I will contend, AI is not only a serious threat to these institutions’ (human) faculty but also a threat to these institutions as educators of undergraduates, at least insofar as they are slow to adjust or fail to offer something that ubiquitous intelligence alone doesn’t bring.

You might be thinking that this scenario is far-fetched and not worth considering, but there is reason not to be overconfident here.

One way this scenario might be realized is via AI whose intelligence is domain-general like ours — so-called ‘AGI’. There is significant uncertainty about whether and when AGI will be achieved, as well as whether it is a coherent notion in the first place, but it is a goal of many AI developers. Some, like ChatGPT creator OpenAI, take it very seriously (see CEO Sam Altman’s recent blog post for his view). Indeed, it may be the best way to make sense of OpenAI’s financial trajectory (more here), in light of skepticism from pessimists.

Another, more likely, option is a confederation of domain-specific AI systems, each able to complete a component of the work currently done by human researchers and professors. The level of intelligence needed is what researchers call “high-level machine intelligence”: AI that can complete every human-completable task more cheaply than humans can. Recent surveys of AI experts put the odds of this arriving by 2047 at 50%.

So, I think the odds of my scenario are sufficiently high to take it seriously. But if you want to quibble about dates or probabilities, just imagine a world where it is realized.

I then ask: what happens in the prestige-driven ecosystem of higher education when such AI faculty are realized?

The natural thought is that human faculty themselves would be threatened with obsolescence. Of course, this depends on the costs, financial and otherwise, of the switch to AI faculty. It may be that the financial costs, once no longer carried by venture capital, IOUs, and the current enthusiasm, are too high.

That said, it is clear why the thought is natural. If AI can complete the two main functions of professors — research and teaching — just as well as they can, but at lower cost, then what role would human faculty play? (And this is to set aside the point that it is unlikely the AI faculty would somehow plateau at human-level aptitude.)

You might resist this thought because you think there is some human essence that AI can’t replicate. Perhaps there is. Perhaps human faculty will always play a sizable role in higher education, even if intelligence comparable to or better than ours is ubiquitous and cheaper than theirs.

However, at the very least, it is becoming less clear what such humanness might be or why it matters in this context, given the trend of recent technological advances, whether in more autonomous goal-seeking by AI agents like Devin or Astra, or in the ability of AI like EVI to express emotional intelligence and AI like GPT-4 to persuade.

So, let’s continue down this road.

If AI could produce research and teach students just as well as human professors do, but at lower cost, then universities that fall behind in shifting to AI faculty would be at a competitive disadvantage.

Crucially, prestige’s role would significantly diminish, especially when it comes to undergraduate education. But why?

So far, one of the striking things about the AI revolution of the past few years is that the most powerful models are available to everyone: my GPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro are the same as yours — or Bloomberg’s. Bloomberg poured millions of dollars into a custom model for financial analysis, only for it to be beaten by the latest generalist model a few months later.

If the AI that can function as professors are similarly duplicable, then the divide between two universities would become largely a function of quantity of AI faculty, not quality. This would reduce the prestige differential because the AI faculty would be available to anyone, anywhere.

Sure, some universities have access to proprietary databases or unique research subjects, as well as laboratory space and technologies, that would enable them to retain or gain prestige amongst those in the know, even if other universities have the same AI researchers as they do. I don’t want to downplay the value of such repositories.

And richer universities will produce more research with greater budgets to spend on AI faculty and researchers.

But generally, these factors don’t matter for undergraduate education and don’t trickle down to have a significant effect on whether institutions’ undergraduate offerings are perceived as prestigious, or so it would seem.

Indeed, attending the likes of Harvard looks a lot less appealing if you can access literally the same quality teachers via any device that has internet access — or on your local community college’s campus.

This isn’t to claim that universities won’t be able to differentiate themselves in other ways. We already see competition for undergraduates on many other dimensions, like sports, culture and traditions, the “experience,” on-campus amenities, and location. Those factors will become more and more central.

That is, again: AI is a threat to (human) faculty unless they provide something that intelligence alone doesn’t bring, and the same goes for universities and colleges.

This sounds like a bad outcome for the faculty, at least relative to the status quo, assuming what they provide (that AI can never provide) isn’t significant. Then again, maybe that assumption shouldn’t be made — right now, AI is certainly far from replacing a good teacher in the in-person classroom context.

But note that this outcome is conditional on the realization of universally accessible AI faculty that can teach anything to anyone, as well as research anything that’s put in front of them, at significantly lower cost than at present.

Such a world seems better than the present one — all things considered — even if it isn’t clear when it will happen or how best to manage the transition, my best efforts at AutomatED aside.

✨Upcoming and Recent Premium Posts

This Week - Tutorial on All Major Functionalities of Microsoft 365 Copilot

What'd you think of today's newsletter?


Graham

Let's transform learning together.

If you would like to consult with me or have me present to your team, discussing options is the first step:

Feel free to connect on LinkedIn, too!