Will Open-Source LLMs Solve Privacy Problems?

Only if they are useful. The release of Meta's Llama 3 brings us closer.

[image created with DALL·E 3 via ChatGPT Plus]

This issue is brought to you by Packback

Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In this week’s piece, I explain the relevance of the news from Meta, I announce updates to our GPT, and I share the link to a free webinar where I will be speaking on Thursday.

💡 Idea of the Week:
Open-Source LLMs for Student Data

Late this past week, Meta released “Meta Llama 3,” the latest in their Llama series of open-source large language models (LLMs).

This is a big deal.

Open-source LLMs are freely available for anyone to use as-is, to modify, to distribute, and to integrate. Historically, their primary challenge has been matching the efficacy of closed-source, proprietary counterparts (e.g., ChatGPT, Gemini, or Claude) on valuable tasks. Open-source models that have shown promise in closing this gap include Meta's Llama 2, Mistral 7B, and Cohere's Command R+.

Llama 3 sets a new benchmark for open-source LLMs, offering robust performance and flexible customization options that were once exclusive to closed-source models. Its 70B version (the version with 70 billion parameters) is competitive with, and on many benchmarks outperforms, Gemini Pro 1.5 and Claude 3 Sonnet, while its 8B version generally outperforms Gemma 7B (Google's pocket-sized counterpart to Gemini) and Mistral 7B. Crucially, the 8B version is small enough to run on everyday devices while still performing well, which is what puts local use within reach of more people.

Furthermore, Meta has focused on model safety, introducing a range of safeguards to ensure the responsible development and deployment of Llama 3.

But why should educators care? Why is this a big deal for us?

Unlike AI tools that risk exposing sensitive data, open-source LLMs can be run locally, on your own laptop, desktop, or virtual machine, without ever communicating with a company's servers. This lets professors use them within a controlled environment, or "sandbox," thereby reducing the risk of data breaches.
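As a concrete sketch of how simple this has become (assuming you use Ollama, one free and popular runtime among several for hosting Llama models locally), running the 8B version on your own machine looks like this:

```shell
# Download the Llama 3 8B model weights to your own machine
# (a one-time download; the model then lives on your disk).
ollama pull llama3:8b

# Run a prompt entirely on local hardware: nothing you type here
# is sent to a third-party server, so student data never leaves
# your device.
ollama run llama3:8b "Draft a rubric for a 1,000-word reflective essay."
```

The same local model can also be reached programmatically, so it can slot into grading or feedback workflows without any cloud dependency.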

Yet safety alone is not enough: a private sandbox is worthless if the model inside it cannot do useful work.

Llama 3, and to a lesser degree the open-source LLMs that immediately preceded it, solves this problem. Llama 3's performance achievements make it worthwhile to play in the sandbox because, if prompted properly, it can more reliably complete the tasks typical of an educator's job.

This is my idea of the week: professors who have avoided LLMs out of concern that proprietary models (including institutionally licensed ones) endanger student data privacy should now be looking seriously at the open-source LLM space.

We are getting to a point where there are viable LLM options for all professors, regardless of risk tolerance and ethical views on student data privacy.

As I note below, I will be releasing a ✨Premium Tutorial on April 29th on setting up your own local LLM, so stay tuned if this interests you…

➕ Updates to Our Course Design Wizard GPT

Have you tried out our course design GPT? It can produce assignments, assignment sequences, rubrics, and course AI policies. I have designed it to be especially effective when it comes to pedagogical issues related to AI — indeed, that’s the whole point!

I just rolled out some new updates to improve its functionality and reliability for a range of fields and use cases.

Give it a try, if you have ChatGPT Plus!

Please give it a rating, submit feedback, or respond to this email to tell me how it performs for you!

📬 In Partnership with Packback:
Free Webinar on Thursday

Packback (the leading Instructional AI platform), the League for Innovation in the Community College, and yours truly (AutomatED Co-Founder, Graham Clay) are teaming up for a free webinar titled "How to Leverage AI for Increased Classroom Efficiency."

During this free session, we’ll discuss how educators can best take advantage of AI to improve efficiency, increase classroom engagement, and better prepare their students for the future.

To RSVP for the webinar, click here.

✨Upcoming Premium Posts

April 24 - Guide: Managing and Protecting Student Data

April 29 - Tutorial: Setting Up Your Own Local LLM

🤖 Enroll in Our AI and Higher Ed
Primer Email Series

If you made it this far in the email, we have another surprise for you:

We just finished developing our new “AutomatED Insights Series”!

It consists of 7 additional emails for new subscribers that convey who we are and some crucial information that we have learned and written about in the past year. Like AutomatED, it is designed for professors and learning professionals who want actionable but thoughtful ideas for navigating the new technology and AI environment — and it will get you up to speed with AutomatED, too.

These emails arrive as a spaced-out sequence, delivered on Sundays, Tuesdays, Wednesdays, or Thursdays.

Every new subscriber to AutomatED is now enrolled in it by default, but you may have signed up before we released it… Sorry about that!

Still, you can enroll yourself manually by clicking "Yes" below. And from the second installment onward, you can unenroll from the series at any time without unenrolling from AutomatED's weekly newsletter.

Would you like to be enrolled in the AutomatED Insights Series?

Login or Subscribe to participate in polls.


Expand your pedagogy and teaching toolkit further with ✨Premium, or reach out for a consultation if you have unique needs.

Let's transform learning together.

Feel free to connect on LinkedIn, too!

What'd you think of today's newsletter?

Login or Subscribe to participate in polls.