2 Objections to Professorial AI Usage

Plus, I share news about my new department course scheduler.

[image created with DALL·E 3 via ChatGPT Plus]

Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In this week’s piece, I respond to two worries about professorial AI usage: (i) that AI cannot be trusted with student data and (ii) that AI cannot express deep field-specific expertise. Objectors are right to worry but wrong to think these objections are decisive — there are practical solutions!

Note that below I post a call for faculty who manage their department’s course scheduling: if you are interested in a new AI-powered tool I am developing to make this process more efficient, please reach out. (And if you know the person who does this task for your department, you might want to let them know about it.)

👎👍 2 Objections, 2 Responses

I am an advocate of ethical AI usage. This means I am something of a moderate in the current ideological battles over AI.

I am not an AI skeptic who has blanket opposition to AI development or use, like those who oppose it across the board because they doubt its origins, abilities, or long-term net utility.

But I am also not a blind AI optimist who refuses to recognize its shortcomings and negative effects.

In any debate about AI usage, I seek first to learn more about the context of the usage, the users, and the tools being used, including how they were developed, what data they have access to, and more. Until I learn these specifics, I tend not to have a view on the matter — I need more information.

It’s just like gossip.

I try to withhold judgment when I hear someone complaining, without context, about an acquaintance’s or colleague’s behavior. I don’t immediately pile on. I have a lot of trouble just nodding my head and saying “Yeah! You’re right — that was wrong of them!” I want to know how different people saw the events, why they might have interpreted them in the ways they did, and so on.

I want to be an ally but not of the wrong cause.

To return to AI, I must say it’s often strange to occupy this middle ground. I run into AI optimists who ignore massive pedagogical and ethical issues with the AI use cases they promote, and I run into AI skeptics who either lack a basic grip on the empirical facts about how AI works or ignore straightforward ethical solutions to their doubts.

Today, I will focus on two objections to professorial AI usage that I have encountered just in the past two weeks (and it’s not the first time, believe me!). After explaining why each is worth taking seriously — I am not belittling those who have these objections — I will respond with a straightforward ethical solution grounded in empirical facts.

You can be your own judge about whether I succeed…

The First Objection:
AI Cannot Be Trusted with Student Data

What the Objectors Say and Why They Have a Good Point: 

AI tools like ChatGPT suffer from insurmountable student data privacy risks, whether owing to the motivations of the companies who produce them, the ways in which they process and handle their users’ inputs, or otherwise.

Even though many AI tools’ developers offer a litany of promises about how they handle users’ inputs, it isn’t clear why we should trust them. Reasons to doubt abound, including:

  • Historically, they have used many users’ inputs for training data

  • They are running out of high-quality non-synthetic training data

  • There is no publicly accessible and reliable way to inspect how they actually handle users’ data

As a consequence, we shouldn’t use them to analyze or evaluate our students’ work, including homework assignments or assessments.

My Response:
Use AI Without Trusting It

How You Can Still Use AI:

In the context of student data privacy, whether a piece of data is private turns on whether its association with a particular person is accessible.

If I am a student and you are my professor, and if you find an essay labeled with my name and an “A” grade in the hallway, copy it, and publish it as yours in the New York Times, your column doesn’t thereby violate the privacy of my data. No one knows that I wrote the essay or that I got an A! Instead, you’ve stolen my work — a separate sort of ethical violation — and taken credit for it — yet another sort of violation.

To wrongfully remove the veil of privacy from my work, you’d need to also publish that I wrote the essay and got an A on it, without my consent. It’s up to me to share; the default assumption is that I don’t consent.

This distinction is at the heart of a range of student data and information privacy laws, including the USA’s FERPA. It is manifested in different ways under different conditions in these laws, but the core idea is that what makes student data “personally identifiable,” and thus (defeasibly) private, is that it could reveal the identity of the student associated with it when combined with other plausibly available information or background context.

(I say ‘defeasibly’ because students can consent to having it released or shared, and there are special cases where it can be released or shared, like for certain financial aid purposes.)

Here’s another example from my ✨Premium Guide on Ethically Using AI with Student Data:

A two-column spreadsheet titled “Student 03902” that pairs grades with assignment names is not necessarily personally identifiable. On its own, it does not identify Student 03902. Nonetheless, it would be personally identifiable if the spreadsheet could be associated, via other plausibly available information, with Student 03902. For example, if it is public information that Student 03902 is me (say, because 03902 is my university-assigned and public student ID number), then the student data on my grades in the spreadsheet is personally identifiable.

So, what’s the takeaway here?

You can still use AI that you mistrust on student submissions. There are two main ways to use AI without trusting it:

  1. Change the Consent Paradigm: Get (explicit, written) consent from students to use any of their personally identifiable data with AI tools. I provide ways to streamline the collection and management of such consent in two ✨Premium pieces, one using Microsoft 365 and the other using Google Workspace. In essence, this amounts to letting your students decide whether they trust AI; the choice is theirs.

  2. Pseudonymize or Anonymize: Break the associations between student submissions and their identities before you share their submissions with the relevant AI tools. In essence, this amounts to making it irrelevant whether you trust AI — you don’t leave it as an option for the developers or hosts of AI tools to decide whether they will misuse student data.

An example of the latter is an anonymization/de-anonymization script that runs before and after AutomatED’s “AutoGrader”, an AI-powered and Canvas-integrated assignment evaluator that some of my consultation clients use.

Here’s how it works:

[flowchart explaining an AI-powered workflow some of my consultation clients use]

Before AutoGrader gets access to student submissions, they are stripped of anything that identifies the students. Then, after it finishes, the identifiers are added back. (These two steps are marked with green.) Furthermore, all of this occurs in the professors’ own IT environment — their own Google Workspace cloud, say — without me, AutomatED, or any AI tools having any access to the pairings between the identifiers and the work identified.
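
To make the pattern concrete, here is a minimal Python sketch of the pseudonymize-then-re-identify round trip. It is my toy illustration of the general technique, not AutoGrader’s actual code, and `grade_with_ai` is a hypothetical stand-in for whatever AI tool you plug in:

```python
import uuid

def pseudonymize(submissions):
    """Swap each student's name for a random token; keep the mapping locally."""
    mapping = {}       # token -> real name; this never leaves your environment
    anonymized = []
    for name, text in submissions:
        token = f"Student-{uuid.uuid4().hex[:8]}"
        mapping[token] = name
        # Scrub the name from the submission body itself, too
        anonymized.append((token, text.replace(name, token)))
    return anonymized, mapping

def reidentify(results, mapping):
    """Re-attach real names to the AI's feedback after the fact."""
    return [(mapping[token], feedback) for token, feedback in results]

submissions = [("Ada Lovelace", "Essay by Ada Lovelace: ...")]
anonymized, mapping = pseudonymize(submissions)
# The AI tool only ever sees tokens, never names:
# feedback = [(token, grade_with_ai(text)) for token, text in anonymized]
# graded = reidentify(feedback, mapping)
```

The crucial design choice is that `mapping` lives only in your own environment, so no AI vendor ever holds the pairing between identities and work.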

The professor gets the benefits of automated grading and evaluation, powered by AI, without trusting the AI to adequately protect student data privacy. For multiple choice quizzes and even some essays and multimedia submissions, this tool can save professors hundreds of hours, improve the quality and speed of their feedback, and help students learn by getting them the feedback they need when they need it.

Sure, there are further issues, like whether students should still be informed of what the professor is doing with their submissions (this depends on a range of factors, on my view), whether the AI tools’ electricity demands are worth the benefits, and so on.

But this is to broaden the debate to a new set of issues — the current discussion concerns student data privacy.

The Second Objection:
AI Cannot Express My Expert Judgment

What the Objectors Say and Why They Have a Good Point: 

AI tools like ChatGPT lack my expert judgment and thus, for this reason, cannot evaluate student work. As a generalist, AI is trained on data that fails to meet our high field-specific standards.

So, if I attempt to get AI to assess the quality of a student’s essay, project, or portfolio, it will not have the appropriate frame of reference, thereby rendering it unable to reliably judge when and how the student succeeds and falls short.

It is incredibly hard to give AI a good grip on what creativity is in art, what logical validity or plausibility is in philosophy, what innovation is in entrepreneurship, or what elegance is in mathematics.

Since our students’ work must be evaluated relative to these ideals, AI cannot evaluate it effectively.

My Response:
Keep Yourself in the Loop

How You Can Still Use AI:

First, note that many of the metrics we use to evaluate our students are not like those listed above.

I agree that, as a philosophy professor, I look for features of my students’ essays that are hard to articulate in ways that AI can evaluate. Logical validity in natural language is still a toughie for LLMs without a lot of careful prompting.

Yet, my metrics and objectives include a range of features that AI can evaluate, like whether their essays have a clear thesis statement related to the module’s content, whether they consider a serious objection to their premises and respond to it, and so on.

This means that AI can still be used to evaluate these sorts of factors, despite the fact that I must get involved elsewhere.
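
To illustrate, here is a minimal sketch, assuming the OpenAI Python client, of how you might have a model check exactly those two factors. The prompt wording and factor list are mine; adapt them to your own rubric:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

RUBRIC_FACTORS = [
    "Does the essay state a clear thesis related to the module's content?",
    "Does it raise a serious objection to one of its premises and respond to it?",
]

def check_factors(essay_text: str) -> str:
    """Ask the model to answer each concrete rubric question, with evidence."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(RUBRIC_FACTORS, 1))
    prompt = (
        "For each question below, answer YES or NO and quote the sentence(s) "
        f"that justify your answer.\n\n{numbered}\n\nEssay:\n{essay_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```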

Second, even if I must be “in the loop” at crucial points in the evaluation process, it doesn’t follow that AI cannot be used to assist me at those very points.

Imagine you have a teaching assistant whose judgment about these key metrics is unreliable — the teaching assistant can grade the other parts of your students’ work, but they cannot evaluate aspects like creativity, innovativeness, elegance, or whatever other complex metrics require deep field-specific expertise.

They could still help you with a range of administrative tasks connected to you providing your judgment of those metrics. For instance, they could organize the student submissions, highlight the parts where you need to get involved, and even help you express your expert judgments more efficiently.

This last point is big, so let me repeat it: non-experts can still help experts express their expertise faster and better. And AI is a very, very impressive non-expert.

I’ve covered this in the past when I discussed how you can use ChatGPT to grade 2-3x faster by creating a custom GPT to convert your rough notes on student submissions into polished and cohesive feedback that explicitly relates to your rubric. (✨Premium subscribers have access to a GPT that does this for them, and I build more advanced versions for my clients; there are a range of options here.)
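
If you would rather see the bones of that workflow than use a custom GPT, the core fits in a few lines. Again assuming the OpenAI Python client; the prompt wording is my sketch, not the ✨Premium GPT’s actual instructions:

```python
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You will receive a rubric and a professor's rough shorthand notes on a "
    "student submission. Rewrite the notes as polished, cohesive feedback "
    "addressed to the student, explicitly tied to the rubric criteria. "
    "Do not add judgments the notes do not contain."
)

def polish_notes(rubric: str, notes: str) -> str:
    """Turn shorthand grading notes into student-facing feedback."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Rubric:\n{rubric}\n\nNotes:\n{notes}"},
        ],
    )
    return response.choices[0].message.content

# polish_notes("1. Clear thesis. 2. Considers an objection.",
#              "thesis buried in par. 3; objection raised but reply thin")
```

The system prompt does the heavy lifting: the model polishes and organizes your judgments rather than substituting its own.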

This is a case where the critics have a point about the limitations of AI — at least for the time being — but where they also lack imagination about how useful it nonetheless is!

Are you convinced by my responses to these objections?

Click and share your reasons (I may quote you anonymously in the next newsletter).


📢 AI News

1. OpenAI unveiled "canvas," a new interface for ChatGPT that enhances collaboration on writing and coding projects. Available now in beta for ChatGPT Plus and Team users, canvas opens in a separate window, allowing users to work side-by-side with AI. It offers inline editing, targeted feedback, and specialized shortcuts for both writing and coding tasks. In essence, OpenAI trained GPT-4o to act as a creative partner, improving its ability to make context-aware edits and suggestions. This marks ChatGPT's first major visual interface update since launch, and it has parallels with Claude’s Artifacts feature (which I have noted in the past is useful for training students to use AI).

2. Google.org is investing $25M+ to boost AI education, aiming to equip over 500,000 U.S. educators and students with foundational AI skills. The funding supports five organizations developing AI curricula, teacher training, and inclusive learning experiences. Recipients include the International Society for Technology in Education, 4-H, aiEDU, CodePath, and STEM From Dance. This initiative, part of Google.org's $75M AI Opportunity Fund, addresses growing demand for AI skills in education, with 30% of K-12 educators already using or experimenting with AI tools (a figure drawn from the year-old RAND report I shared a while back).

3. Georgia Tech researchers have created The Socratic Mind, an AI oral assessment tool that uses the Socratic method to test students' knowledge. The tool aims to deter cheating and improve critical thinking skills by engaging students in interactive discussions. It's being piloted with 2,000 students this semester and could have applications beyond education, such as interview prep.

4. AI can improve or hinder coding education, according to Lehmann, Cornelius, and Sting in "AI Meets the Classroom: When Does ChatGPT Harm Learning?" The results, in a nutshell: (a) students asking AI for explanations learned more, while those using AI as a crutch for solutions learned less; (b) copying and pasting enabled overreliance; (c) beginners benefited most from AI access but were also most prone to misuse; (d) students felt they learned more with AI than they actually did; and (e) the authors argue that educators must guide proper AI use to maximize learning. (Another study in the same domain here.)

5. Princeton physicist John J. Hopfield and British Canadian professor Geoffrey E. Hinton have won the 2024 Nobel Prize in Physics for their pioneering work in artificial intelligence. Their research since the 1980s laid the foundations for modern machine learning, including Hopfield's work on neural networks and Hinton's image recognition breakthroughs. Hinton, who left Google in 2023 to speak freely about AI risks, accepted the prize from a “cheap hotel” room in California.

🧪 Call For Beta Participants:
Department Course Scheduler

In the next few weeks, I expect to have completed the beta version of an AI-powered department course scheduler.

As inputs, it takes whatever information departments and academic units currently use to schedule their faculty's teaching assignments, regardless of what it looks like. While some departments may provide perfectly quantified faculty preferences, most will provide a jumbled and idiosyncratic spreadsheet with the course schedule, along with some notes in varying formats on what each faculty member wants to teach, can teach, and needs to teach.

In essence, the app will act like a human scheduler: it can figure out what’s going on with whatever the department gives it and complete the task from there.

(Departments that want to use a homogeneous and orderly input form will also have that as an option; I won’t be holding my breath on uptake.)

As outputs, it generates an assignment of faculty to courses — and adjustments to courses within specified constraints, if this is desired — that fits the provided faculty preferences and other constraints.
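
For a feel of the underlying problem: faculty-to-course assignment is, at bottom, a classic matching problem. Here is a toy sketch using SciPy’s assignment solver. It is my illustration of the math, not the app’s actual engine; the hard part the app handles is distilling messy departmental inputs into something like this preference matrix plus hard constraints:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

faculty = ["Prof. A", "Prof. B", "Prof. C"]
courses = ["Intro", "Ethics", "Logic"]

# preference[i][j]: how much faculty member i wants course j (higher = better),
# as might be distilled from a department's notes and spreadsheets
preference = np.array([
    [3, 1, 2],
    [2, 3, 1],
    [1, 2, 3],
])

# Hungarian algorithm: find the assignment maximizing total preference
rows, cols = linear_sum_assignment(preference, maximize=True)
for i, j in zip(rows, cols):
    print(f"{faculty[i]} -> {courses[j]}")  # Prof. A -> Intro, etc.
```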

If you’re interested in using this sort of tool, please reach out to me via email to express interest. Either I will let you know when it is available for purchase (don’t worry: it will be much cheaper and easier to use than pre-existing tools/services) or you can get a discount by helping me test the beta version.

Email me by responding to this email or clicking the button below:

And if you don’t handle scheduling for your department but know who does, hook them up!

✨Upcoming and Recent Premium Posts

This Week - Tutorial on All Major Functionalities of Microsoft 365 Copilot (sorry, this was delayed)

What'd you think of today's newsletter?


Graham

Let's transform learning together.

If you would like to consult with me or have me present to your team, discussing options is the first step:

Feel free to connect on LinkedIn, too!