AI-Immunity Challenge: Lessons from a Clinical Research Exam
What we learned from using AI to try to crack an exam's interrelated questions, verboten content, and field-specific standards.
[Image created with DALL·E 2]
Welcome to AutomatED: the newsletter on how to teach better with tech.
Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
Let’s take a look at the first assignment we tested for our AI-immunity challenge.
With the free and publicly available version of ChatGPT, we tried to crack this real Clinical Research exam in under an hour. The exam posed a challenge because of its interrelated questions, the verboten and unique content they concerned, and the field-specific standards we had to meet.
Nonetheless, we did just well enough to earn a passing grade from the professor.
We would have done better had we used an improved version of ChatGPT and prompted it more thoroughly on how to meet the standards of Clinical Research. In fact, when we did exactly that, we earned a B+/A-.
There are many takeaways here for professors in a range of fields. I summarize them at the end if you want to skip ahead; read on for the details…