GPT-5 is Coming and What To Do About It

Plus Los Angeles Unified School District goes all in on AI.

[image created with DALL·E 3 via ChatGPT Plus]


Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In this week’s piece, I discuss why the forthcoming GPT-5 reinforces the need for educators to future-proof their use of AI (and their strategies to avoid its misuse), and I share some big news from Los Angeles.

Plus, I provide you free access to our most popular Premium pedagogy Guide on how to prevent and discourage AI misuse. Professors from the University of Southern California to Johns Hopkins have bought Premium because of this Guide. Now is the time to see what Premium offers you, with prices going up this Friday!

💡 Idea of the Week:
Future-Proofing AI Uses (+ Avoiding Misuses)

Since Sam Altman's recent interview on the Lex Fridman Podcast, rumblings about the impending release of a new family of GPT models from OpenAI have grown louder. It seems the family will be called ‘GPT-5’, it may arrive as soon as this summer, and it would be the company's first big jump in model family since the release of GPT-4 in March of 2023 (though there have been many smaller updates since then).

Reminder: GPT-3.5 is the family of models that has powered the free version of ChatGPT since its launch in November of 2022, while GPT-4 has powered the paid version (ChatGPT Plus), including custom GPTs like our Course Design Wizard, as well as Microsoft Copilot (formerly Bing Chat; click here for our explainer).

While we don’t know what exactly to expect from the new model, here are a few options:

  • Increased context window size

    • This is the limit on how much input you can give the AI at once; you can think of it as the model's short-term memory. Currently, GPT-3.5’s context window maxes out at ~16,000 tokens (for GPT-3.5 Turbo), while GPT-4’s maxes out at ~128,000 tokens (for GPT-4 Turbo). As a rule of thumb, ~1,000 tokens ≈ 750 words. Perhaps GPT-5 could compete with or go beyond Google’s Gemini 1.5 Pro model, which has a context window of ~1,000,000 tokens.

  • Improved reasoning capabilities

    • Reasoning capability is hard to characterize in general terms, but AI researchers use many quantifiable benchmarks, including comparisons to humans on standardized tests like MMLU (on which Gemini Ultra already outperforms human experts) and results on LLM-specific challenges like BIG-Bench Hard. We would expect GPT-5 to improve significantly on these benchmarks, especially the tougher-to-crack ones.

  • Improved multimodal capabilities

  • Different interface for ChatGPT

    • Many people complain about the limitations in the current user interface (UI), particularly regarding navigation and management of long conversations or research tasks. We might see GPT-5 come with a more intuitive and efficient interface, making it easier for users to track and expand upon their interactions with the AI.

  • Improved Retrieval-Augmented Generation (RAG)

    • RAG is how GPTs integrate specific data sources into generative outputs, like the documents you upload to ChatGPT or the real-time search results referenced by custom GPTs with web access. This integration could be refined, allowing GPT-5 to rely more effectively on specific sources and pull from more current ones, enhancing its usefulness for nuanced, domain-specific, and time-sensitive research.
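
To make the context-window figures above concrete, here is a back-of-the-envelope sketch in Python for checking whether a document fits a given model's window, using the ~1,000 tokens ≈ 750 words rule of thumb. The window sizes match the figures cited above; the function names are our own illustration, not any official API.

```python
# Approximate context-window limits (in tokens) cited above.
CONTEXT_WINDOWS = {
    "gpt-3.5-turbo": 16_000,
    "gpt-4-turbo": 128_000,
    "gemini-1.5-pro": 1_000_000,
}

def approx_tokens(word_count: int) -> int:
    """Estimate tokens from a word count (~4 tokens per 3 words)."""
    return round(word_count * 4 / 3)

def fits(word_count: int, model: str) -> bool:
    """Rough check: does a document of this length fit the model's window?"""
    return approx_tokens(word_count) <= CONTEXT_WINDOWS[model]

# A 100,000-word manuscript (~133,000 tokens) overflows GPT-4 Turbo's
# window but fits comfortably within Gemini 1.5 Pro's.
print(fits(100_000, "gpt-4-turbo"))    # False
print(fits(100_000, "gemini-1.5-pro")) # True
```

In practice, real tokenizers vary by model and by text, so treat this as a planning estimate, not an exact count.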
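
The RAG pattern described above can be sketched in a few lines: retrieve the most relevant snippet from a small document store, then prepend it to the prompt. A real system would use embeddings and a vector database; the word-overlap scoring and prompt wording below are toy stand-ins so the example stays self-contained.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: count of words shared by query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document with the highest overlap with the query."""
    return max(docs, key=lambda d: score(query, d))

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved source to the question (the RAG step)."""
    context = retrieve(query, docs)
    return f"Use this source:\n{context}\n\nQuestion: {query}"

# A tiny "document store" of course information.
docs = [
    "The midterm covers chapters 1 through 4 of the syllabus.",
    "Office hours are held Tuesdays at 2pm in Room 301.",
]
print(build_prompt("When are office hours held?", docs))
```

The improvement hoped for in GPT-5 is essentially better versions of the `retrieve` and `build_prompt` steps: picking the right sources more reliably and grounding the answer in them more faithfully.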

These possibilities bring me to my idea of the week: you should always have an eye on how “future-proof” your AI uses and avoidances are.

For using AI: When developing AI-powered tools or content, aim for designs that are easily updatable or modular. This approach lets you integrate new AI advancements without overhauling your entire workflow or method. For instance, in the medium term (not to mention the long term), you should build your AI-assisted lesson planning so that you can swap in new models with minimal disruption, ensuring it remains a valuable resource regardless of the underlying AI technology.

For structuring courses: Similarly, when integrating AI into curricula, focus on principles and skills that transcend specific models. Instead of crafting a course around the peculiarities or limitations of GPT-3.5, for example, emphasize critical thinking, problem-solving, skill development, and the ethical use of AI. This ensures that your course remains relevant and valuable, even as newer models emerge with capabilities that could easily circumvent today's assignments or challenges. We have discussed this a lot in connection to our “AI-immunity challenge,” which we will be resuming in the coming weeks (prior installments here, here, and here).

As educators and technologists, our goal should be to harness AI's potential in a way that is sustainable and resilient to change.

This means embracing the constancy of evolution in AI, preparing for the future by anticipating change, and fostering adaptability in our tools, courses, and pedagogical approaches. By focusing on the enduring aspects of AI's impact on education, we can create learning environments that remain effective and engaging, regardless of the pace of technological advancement.

Free AutomatED Premium Guide:
Discouraging and Preventing AI Misuse

This Friday, our Premium tier increases in price,
so now is the time to lock in our current annual rate!

To show you what you would gain from Premium, this week we are providing free access to our comprehensive Guide to preventing and discouraging AI misuse by students.

This is our most popular Premium pedagogy Guide by far (~15,000 views), released in August of last year.

We are increasing the price of our Premium subscription to better reflect its growing value — and to fuel this very growth. It comes with:

  • Pedagogy Guides

  • Productivity Tutorials

  • AutomatED’s AI tools: early access

  • Discounts on Webinars

  • Q&As

📢 Quick Hits:
News Tidbits for Higher Educators

  • Los Angeles Unified School District (LAUSD) released “Ed”, which is an AI chatbot available in multiple languages that they pitch as a “personal assistant” to help create individualized learning plans for students, answer administrative and logistical questions (e.g. about student grades or bus arrival times), and provide resources to help students learn on their own time.

    • Why it matters: With more than 400,000 students, LAUSD is the second-largest public school district in the United States, so it has massive influence on educational norms in the K-12 setting. Simply releasing an AI tool like Ed is a major move by LAUSD, but other steps it is taking will also be norm-setting, like its requirement that students complete the “Digital Citizenship: Artificial Intelligence Course” before using the chatbot and its decision to use a sandboxed AI model (something we have discussed before).

  • In a complicated deal, Inflection — the creators of the Pi chatbot, widely seen as more “personal” and natural at communication than other AI chatbots — has been effectively gutted by Microsoft.

    • Why it matters: The underlying tech behind Pi makes it more useful for use cases where human-like communication is essential, like with younger students or in tutoring contexts. Microsoft will distribute this underlying tech in its own right but also integrate it in Copilot, Bing, and Edge, at least on the “consumer” side, via a new unit: Microsoft AI. This makes it more likely that those institutions running Microsoft 365 will benefit from Pi’s strengths.

  • Elon Musk has complained that OpenAI is not sufficiently transparent about its data use practices and development plans, especially since he interprets these as conflicting with its (partially) non-profit mission. He has sued OpenAI, and OpenAI has responded point-by-point to the suit's claims. To draw attention to the dispute, Musk has now released the code and weights behind Grok, the AI model developed by xAI, an X-associated team.

    • Why it matters: There has long been debate about how transparently AI development should proceed, given the risks and opportunities that come with making it easier and cheaper for more people and companies to leverage other developers’ research and products. For educators, the question is how to weigh the benefits and costs of staying within proprietary AI ecosystems, like those of Microsoft or Google, versus relying on open-source and more locally hosted alternatives.

📬 From Our Partners:
An AI Literature Analysis Tool

Stay on the Cutting Edge of AI Research

Sign up now to get personalized summaries of the most important new AI papers delivered to your inbox daily with GoatStack's free AI Agent.

Our AI reads 4000+ of the latest research papers and handpicks key insights based on your interests. Get custom briefings on topics like NLP, computer vision, robotics, and more without reading every paper.

GoatStack removes the hassle of staying up-to-date so you can focus on advancing your own AI. Start your free trial today.

Let us shape the newsletter for you - just reply with feedback so our AI can learn your preferences.

The future of AI, personalized. Follow us on Twitter @GoatStackAI to engage in discussions about the latest AI innovations and news.

👀 What to Keep an Eye on:
Our “Train Your Students to Use AI” Webinar

Our next Zoom webinar will focus on training students to use AI, and it will occur on Saturday, April 20th from 12pm to 1pm Eastern Daylight Time.

A 10% discount is available to Premium subscribers.
