✨ Guide: Designing Assignments and Assessments in the Age of AI
The past two years of our research are gathered into a guide for professors for the 2024-2025 school year.
[Image created with DALL-E 3 via ChatGPT Plus]
Welcome to AutomatED: the newsletter on how to teach better with tech.
Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
In this fortnight’s Premium edition, I present a guide to planning and developing assignments and assessments in the age of artificial intelligence, at least as things stand in late summer 2024 (when this Guide, originally posted in December 2023, was last updated).

Last year, we were sent over a dozen take-home assignments for our AI-immunity challenge. Our goal, in trying to solve each of them in an hour or less, was to establish that AI tools perform better at a range of college-level work than many professors thought possible. We had mixed results with the especially tough cases we covered in our newsletter pieces (see here, here, and here), but all of them revealed impressive and surprising capabilities. (We are still taking submissions, if you have one for us.)
We also looked into AI detectors (and again more recently), thought about how to design assignments that encourage constructive AI use, and came up with ways to design assignments to prevent AI misuse. The latter line of research culminated in early August in our comprehensive Premium guide to preventing and discouraging AI misuse in the university setting. We asked a librarian and a professor for their thoughts on the AI era; we worked with educational researchers to learn how to do oral assessments and encourage class discussions; and we have been consulting with a range of professors, departments, and institutions about AI integration. Since then, we have focused more on how our work as professors can be reimagined in light of the power of AI, whether we are using LLMs to plan lessons, using ChatGPT to help grade/evaluate, or using custom GPTs for in-class activities.
In this piece, I build on these experiences to present a comprehensive guide to planning and developing university-level assignments and assessments in light of the risks and opportunities AI presents.
This is my 40,000-foot (but 6,000-word) take on the option space and how to navigate it as a professor, at least as things stand as we enter the 2024-2025 school year.
🖼️ The Big Picture
I have created a flowchart or decision tree that represents the following sections, with [A] representing a potential assignment or assessment. This is your map of this guide.

A map or decision tree to conceptualize this guide.
Throughout, I will use three catchall terms. A “task” is what an assignment or assessment requires of students (and is designed to train them to complete). A “skill” is a disposition or capability that an assignment or assessment is designed to train students to develop (perhaps in order to complete tasks). And “knowledge” refers to the representations that an assignment or assessment is designed to train students to retain or be able to recall.
🥅 1 - Check Objectives’ Alignment

The part of the flowchart covered in this section.
The first step to designing an assignment or assessment is to confirm, to the best of your ability, that its learning objectives are suitable for the course you are teaching, given the program within which it is found (e.g., major, minor, certificate, general education curriculum).
Would a student fulfill the goals of the program — or the relevant part(s) of them — by achieving the learning objectives of a potential assignment or assessment?
Should they do so in your class?
To answer the second question, you need to decide whether meeting these learning objectives is, in this precise context, the best use of your time and your students’ time. Given the finite space for assignments and assessments in your course, there are likely far more assignments and assessments for which you would answer ‘yes’ to the first question than you can include, so you need to be especially discerning in answering ‘yes’ to the second.
I leave this part of the decision process up to you, the professor, since it is subject to too many context-specific factors to offer general advice about here. While some of the following sections cover considerations and AI capabilities that undoubtedly impinge on this part of the decision process (e.g., by enabling students to achieve a given program goal faster), the ways in which they do so are beyond my purview here.
📚 2 - Learn AI’s Capabilities

The part of the flowchart covered in this section.
Once you have determined that a given assignment’s or assessment’s learning objectives should be achieved by students in your class, the next step is to determine how AI’s capabilities relate to these learning objectives. This requires you to achieve a baseline level of understanding of AI’s capabilities before you turn to deciding how AI should be involved in your assignment or assessment.
To learn about AI’s capabilities, there are two broad strategies:
(a) experiment with AI tools yourself; or
(b) gather the perspectives of others by reading tutorials and analyses, watching videos on YouTube and elsewhere, and discussing with others what they know.
Let’s tackle these in turn…
🧪 2a - Learn by Experimentation
I have placed this strategy first because I think it is more important than the second one. Although there are other reasons to think that experimentation with AI tools is crucial for professors, one reason is sufficient on its own: you are teaching your course in a way that only you can — there are countless idiosyncratic aspects of each professor’s pedagogical choices — and so you are best positioned to judge how AI tools interface with your approach. No one else can see as clearly as you can how a given AI tool’s capabilities could be effectively incorporated into your pedagogy. What would make sense for one professor might make little sense for another.
When experimenting with AI tools, you need to get your hands dirty for an extended period of time. This is one of the few hard and fast rules for experimentation. Since there is a learning curve for every AI tool, you need to experiment with each of them for enough time to see what they really can do. Ethan Mollick thinks 10 hours per tool/model is the minimum (I think this is a good rule of thumb). Your subject matter expertise will make it easy to see whether a tool’s outputs are any good, but your ability to elicit its best outputs takes time to develop.
Here are some examples of tools that I would recommend experimenting with, organized by category…
Large language models (e.g., ChatGPT, Claude, Gemini)
Large language models paired with other tools (e.g., ChatGPT-4o with Advanced Data Analysis)
Image generators (e.g., DALL-E 3)
Code assistants
Visual design tools
Productivity tools
👀 2b - Learn by Gathering Others’ Perspectives
A core problem with the AI space is that it is massive and complex. The trick, then, is to quickly get synopses of the relevant AI tools’ capabilities from others who have expertise and experience with them.
I recommend acquiring these synopses via sampling.
By ‘sampling’, I mean looking in a scattershot fashion through the sea of AI information sources for those weighing in on issues that matter to your pedagogy. Once you have found some of these sources, check them periodically so you don’t miss any big updates, or do deep dives on their past outputs when needed. When they mention another source that sounds interesting or useful, check it out and add it to your list. In general, I think of this process as like skimming a ton of books, journal articles, or encyclopedia entries, looking for threads to pursue that are relevant to the thesis of a paper I am writing.
Here are some assorted examples of sources of information that I find useful, to give you an idea of how broadly I think you should sample:
Latent Space (a newsletter focused on AI and engineering/business)
Don’t Worry about the Vase (a newsletter with weekly summary updates about assorted topics in AI, with a more technical focus)
One Useful Thing (a newsletter covering generic/abstract information about LLMs and their development, with a higher ed focus)
The Neuron (a newsletter covering industry updates, prompt tips, and other trends in AI, with a very broad, non-ed focus)
AI Tool Report (similar to The Neuron)
The ChatGPT subreddit (any and all things ChatGPT — literally)
Anna Mills on Twitter/X (writing teacher who tweets/retweets a lot of useful information about AI in education)
The New York Times Technology page (Natasha Singer & Co. with big picture pieces and interviews)
OpenAI’s News page (updates related to ChatGPT)
Anthropic’s News page (updates related to Claude)
Google’s Gemini News page (updates related to Gemini)
🧐 3 - Decide How AI Will Be Involved

The part of the flowchart covered in this section.
Now, once you have achieved a baseline level of understanding of AI’s capabilities, you need to decide how AI will be involved in your assignment or assessment.
The two options here are as follows: either
(a) you design the assignment or assessment such that students are allowed, encouraged, or taught to complete it by using or relying on AI; or
(b) you design the assignment or assessment such that students are not allowed to complete it by using or relying on AI.
Whether you choose (a) or (b) depends on a range of factors that we can gather under the following three headings.
Note: A deep dive dedicated to these factors, in the context of training/teaching students to use AI, is found in my Guide on how to train/teach students to use AI. Go there after reading this Guide if you are at all leaning towards (a) for an assignment or assessment.
The nature of the skills or knowledge that you specify in the learning objectives for the assignment or assessment.
Consider the spectrum from maximally general skills — like the ability to determine whether a source provides evidence for a claim — to highly specific skills — like the ability to use ChatGPT-4o with Advanced Data Analysis to solve a unique econometrics data analysis task. There is a spectrum in the case of knowledge, too, from maximally general knowledge about broad domains to highly specific knowledge about narrow subsets of them.
Whether you should choose (a) or (b) depends, in part, on the generality of the skills or knowledge specified by your learning objectives.
If you are trying to inculcate very general skills or knowledge, it becomes more likely that you ought to favor (b) — that is, students should not be allowed to use AI in completing the assignment or assessment. Perhaps students need to be able to display or deploy these skills or this sort of knowledge in contexts where AI is unavailable to them. For example, if the learning objective relates to conversational charity — a student who fulfills the learning objective needs to be inclined to be charitable to their conversational interlocutors — then the student must be able to be charitable without depending on AI.
If, on the other hand, the skills or knowledge are bound in tight ways to the AI tools themselves, then it becomes more likely that you ought to favor (a) — students should be allowed to use AI in completing the assignment or assessment. For example, if the learning objective of the assignment or assessment relates to mastery of a given LLM, then using the LLM at some point during its completion is a must. Skills or knowledge tied to AI tools’ use are rarely at the general end of this spectrum.
In reflecting on this factor, you need to keep a close eye on your students’ futures. Will there be demand for a certain skill or bit of knowledge in the marketplace they will enter? Will there be a need for it in their personal lives? Does this skill or knowledge turn on the specifics of AI tools?
Your students’ preexisting skills, knowledge, and preferences.
Consider the background and the prior knowledge of the students. If students already have a strong understanding of the basic skills and knowledge of your field, it might be more beneficial to choose (a). Perhaps they already have the foundations or fundamentals under their belts, and they are ready to deploy them in the most effective or efficient way feasible — and perhaps AI tools enable them to do so, so they need to learn how to use them well.
Conversely, if students are new to the field, it might be more helpful to first teach them to perform tasks manually, without the help of AI tools (b), especially if they lack the baseline or foundational skills or knowledge needed to judge the quality of an AI tool’s output (a topic I cover at length in my Guide on how to train/teach students to use AI).
It might be fine if students use AI-driven research methods to find sources once they already know how to evaluate the quality of sources. Yet, if they have no ability to discern the quality of a source, they might rapidly run into trouble using AI tools for this purpose.
Finally, you may need to consider the preferences of your students, even if you think they are mistaken or misguided (i.e., that they ideally would be different), because their preexisting preferences affect how motivated they will be to achieve the learning objectives you set before them.
If students strongly desire to learn how to use AI tools to complete a given task, then this provides some reason — admittedly defeasible — to respect this desire in your design of the assignment or assessment by leaning towards (a). I can speak from experience that meeting students “halfway” is often a successful strategy to leverage their desires/motivations while respecting my views on what they, ideally, would want to do or learn.
The constraints governing the use of AI to complete such an assignment or assessment.
There are several potential constraints that you need to keep an eye on. One that is particularly salient is the relationship between (i) the amount of training that you would need to provide students to successfully use AI tools to complete the assignment or assessment and (ii) the amount of time that you have available in your course to train them.
Some cases where you might favor (a) are going to be ruled out because you lack the instructional time to set up your students for success. Sure, you can always just release students into the AI wild to attempt the assignment or assessment with AI tools (i.e., without any training), but this tends to magnify disparities between students who already have skills or knowledge related to the tools and those who lack them. (Personally, I have found this strategy — where students are not trained at all on AI use — creates many more problems than it solves. I tried it early in my AI pedagogy journey, in 2022.)
Another potential constraint is the availability and accessibility of the relevant AI tools. If they require paid subscriptions, are not accessible to students with special learning needs, or require internet access that not all your students have at home, then this is a defeasible reason to favor (b) — not allowing AI use by any student, even those who have the money, lack special learning needs, or have reliable internet access.
Finally, you want to consider whether the specific AI tools required to complete the assignment or assessment raise general ethical concerns, like those surrounding privacy or data attribution. We have discussed the privacy of student data in connection with AI at great length — see my Guide on ethically using AI with student data for more — but there are many other issues to consider that you will likely have encountered in your prior reading and listening about AI.

🛠️ 4 - Develop the Assignment/Assessment

The part of the flowchart covered in this section.
Once you have a clear idea of whether you want to (a) design the assignment or assessment such that students are allowed, encouraged, or taught to complete it by using or relying on AI, or (b) design it such that students are not allowed to do so, you need to get to work on the actual development process. I will refer to (a) as developing the assignment or assessment to be “AI-inclusive” and to (b) as developing it to be “AI-exclusive.”
🤗 4a - Develop it to be AI-Inclusive

Subscribe to Premium to read the rest.