Why a Notre Dame Professor Bets AI is Good for the Humanities

We talk to Paul Blaschko, philosophy professor at Notre Dame, about his highly successful courses, professors' pedagogical role, and technology.

Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In this week’s edition, we talk with professor Paul Blaschko about pedagogy, why he is not afraid of AI, work and good work, what makes a life good, a claim about AI he would bet $200 against, and much more.

Graham Clay: Tell us in your own words who you are and a bit about your role as a professor.

Paul Blaschko: I am an assistant teaching professor at the University of Notre Dame. I've been at the university for about ten years, and during that time I've been in almost every kind of role you could imagine. I was a PhD candidate here and then I was a postdoc. I was a staff person for a year and a half and then transferred over to faculty. Now I also direct a program which we'll talk about maybe a little later.

In addition to that stuff, I'm a father of four young children; a resident of South Bend, Indiana, which I love; and I describe myself as kind of a serial entrepreneur. I've got stuff on campus that I'm working on, some grant funded projects, internal and external. But I've also got a startup that I've been working on with a couple of faculty members here and some people outside the university. It's what I do on nights and weekends when I'm not in my garage drumming with my son.

Graham Clay: And what is your training and background? You mentioned that you were a student at Notre Dame, so tell us a bit.

Paul Blaschko: I'm going to go all the way back to when I was an undergraduate. I attended the University of St. Thomas in Minnesota originally as a seminarian discerning the Catholic priesthood, but we'll skip that bit. There I was a creative writing major and philosophy major, which of course there meant analytic philosophy. One reason I go back to that is because at the time I was discerning very seriously either a PhD in literature or in English, or an MFA. My dream was to become a playwright and a short story writer, but I decided, oddly enough, for job security type reasons that analytic philosophy was a more lucrative and promising field.

Graham Clay: Great choice!

Paul Blaschko: Right? And so I went to the University of Wisconsin Milwaukee, where I discovered that analytic philosophy was broader than analytic Thomism, which is what all of my teachers at the University of St. Thomas were interested in. A big part of my graduate career was just trying to grapple with the historical context that gave rise to analytic philosophy. And this was all outside of the seminar room for the most part.

In my creative writing classes, in my English classes, I was constantly reading continental philosophy or literary theory or critical theory. And to me, that was the richest, most interesting place to find ideas, philosophical ideas, though I've always really resonated with analytic methodology and its emphasis on rigor and clarity and transparency of meaning and that sort of thing. So, I've always kind of bounced back and forth between those things.

When I entered the PhD program here, I said, for the next five years, I will act as if analytic philosophy is philosophy, full stop. I did a PhD with Robert Audi in normative epistemology. I was focused there on this question of agency and responsibility with respect to your beliefs. How can we blame people for believing dumb stuff? This is how I sometimes describe it to relatives or people I meet on airplanes. And all the while I was trying to keep up my interests in these broader, bigger, more public facing philosophical ideas and topics.

A huge outlet for that, for me, was teaching and pedagogy. I found if you get into the world of higher education pedagogy, it's kind of the Wild West, because it's not professionalized. They don't really teach us how to teach, and they don't care how you teach. And so you can do some really cool, really creative things. You could design a whole course around the concept of home or the concept of work. And of course, it's your own classroom.

I also kind of discovered, in part because my wife is a professionally trained educator, that there is a craft there. There's technical knowledge that you can gain, and it's a beautiful, wonderful craft to devote your life to. So I spent a ton of time developing courses.

Graham Clay: One of those courses is "God and the Good Life," which has enjoyed tremendous success at Notre Dame. Before I ask you about the relationship between this course and technology, can you explain to us what the course is?

Paul Blaschko: In the spring of 2015, Meghan Sullivan and I got together, and she had this idea: Notre Dame doesn't have a distinctive, almost public facing "Introduction to Philosophy" class like some other universities do. Her touchstone was Harvard's "Justice" class by Michael Sandel. Now, a lot of those classes at the time – and even still – are kind of cults of personality. You get a famous academic, and they're really charismatic, and they put on a show, and a bunch of students want to take it, and eventually they get a book deal or a TV series or whatever. Her idea, which was a really great idea, was that Notre Dame could have this, but it could be more of an institutional thing. It could be a key part of the core curriculum.

With that as inspiration, she and I started brainstorming about what that would look like. And we figured that because it's a Catholic institution, and because virtue ethics is not just respectable within philosophy but holds a core place in the tradition, in the entire history of philosophy, our course was going to be a virtue ethics class. We developed a class called "God and the Good Life," and the idea was that we would introduce students to philosophy not as a discipline. It's not an introduction to the major or an introduction to the professional discipline of philosophy, but an introduction to philosophy as a way of life, to use a phrase we got from some of our colleagues.

For Aristotle and Socrates and Plato, philosophy wasn't something you just did in the classroom. You did it while you were walking around with your friends. For the Stoics, philosophy wasn't the sort of thing that you studied to make advances technically in science and mathematics. It was a response to the deep anxiety of your own mortality. And, so, we thought: "That's a hook!" Students care about that stuff. They worry about that stuff. They spontaneously talk about it in class, even if it's not on the syllabus. Why not give them a first philosophy experience that just foregrounds all of that? That's what we do each day in "God and the Good Life."

We serve about 1200 Notre Dame students every year. I think that it's the biggest class in the humanities here at Notre Dame. There are STEM classes that I'm sure are similar, but we serve about half of the freshman population. Half of the students at Notre Dame take "God and the Good Life" as their first philosophy course.

It's a massive operation. We've got maybe 80 instructors at any given point: usually about three faculty, nine undergraduate or graduate TAs, and 50 or 60 undergraduate peer dialogue leaders who are part of the delivery of the class. And we walk our students through all these big questions, and at the end we ask them to write a philosophical apology telling us what they think about the meaning of life and what their answers are to the biggest questions.

Eventually, the course led to a contract with Penguin Press for a book about the philosophy of the good life. It's called The Good Life Method. A course that I'm teaching now, "The Working Life," led to a contract with Princeton for a book that's going to be published in spring or summer of 2025.

So, yeah, I've just found a lot of opportunity, I guess, maybe on the margins or just outside of mainstream analytic philosophy, though that is sort of at the core of my training. And I certainly teach in an analytic department. Absolutely love it. Love to talk to my colleagues about philosophy, but I also just love kind of existing on the fringes of it.

Graham Clay: How does technology fit into "God and the Good Life"? The course seems very traditional, in some ways, but I gather that you rely heavily on technology.

Paul Blaschko: One of the things that I have focused on with "God and the Good Life" from the beginning is making sure that we're using cutting edge technology to ensure that we're delivering the best educational experience for our students that we possibly can. At the very beginning, I felt very strongly and convinced Meghan that this course should exist entirely online. There's no syllabus, there's no paper, there's no textbook, there's nothing students need to buy or bring. The syllabus is the website.

On this website, godandgoodlife.nd.edu, you've got a landing page where a trailer pops up. It's this highly produced video that we made with some colleagues on campus. And then if you click around, you'll find the calendar of readings. If you click into each tile – because we've made visual tiles that represent each day – you'll find links, and in the links you'll find primary texts that have been digitally annotated with pop ups. If you're reading through the Nicomachean Ethics and find the distinction between instrumental and final goods, you can click on the word 'instrumental' and there's a pop up and it tells you what it is – it illustrates it for you.

There are also YouTube videos that we've created, produced, or found and that we embed in the text itself. I felt really strongly about that for a couple of reasons. One is that, with all due respect to all of my teachers, and I love my teachers in philosophy, I think I was primarily educated in philosophy by YouTube. The lecture was just a kind of CliffsNotes or a little syllabus that then let us enter this online world of ideas on YouTube. For me, that experience was enlightening because it made me realize that education through videos and engaging content could be much more effective than a traditional brick and mortar, paper based, lecture heavy course. There are so many constraints on the classroom format at a university that, unless you're making use of this technology that our students are already familiar with and that they already use to learn almost everything they need for their daily lives, their careers, their professions, their fun activities, and everything else, you're really missing a huge opportunity.

There are also a lot of drawbacks to heavily relying on technology. For instance, if you spend $100,000 on highly produced YouTube videos, and then two years later, Apple comes out with a new cinematic quality camera on your iPhone, then all of your videos look like they were shot on, like, VHS camcorders – that sucks! That's $100,000 down the drain. Another way to think about it is to say, "Okay, we got two years out of that, or we can use some of that stuff in new ways, like make a transcript of it and have somebody redo it or turn it into a podcast." You're constantly kind of battling against innovation in various areas if you're trying to stay on the cutting edge. I used to encourage everybody I knew to do thoroughly digitally integrated classes. Now I think it's certainly the right choice for me, but you need to weigh the pros and cons if you're somebody who is interested but not thoroughly convinced.

Graham Clay: How are you and the other instructors handling the AI shift in a class that's meant to be timeless, but nonetheless is still occurring within this kind of technological transformation? Tell us how you see AI in general, with regards to its role in your pedagogy.

Paul Blaschko: My experience with AI in the classroom and in course design started maybe three years ago, when a lot of ed tech startups started to come out with these platforms to monitor student dialogues and discussions. Packback is the name of the one that first started contacting me. Now, every time an ed tech salesperson calls me and offers a free hour consultation, I always take it and I always listen.

What was interesting to me is they were trying to solve a real problem, a very big problem in contemporary course delivery, especially for big classes, which is that discussion boards suck and they are pedagogically useless. And yet it's almost impossible in a big class, especially if you're delivering it partly or entirely online, to avoid a discussion board type platform or forum. On the other hand, you look at social media and you see that your students are doing exactly what you want them to be doing in the discussion boards, but just on their own, unguided, and without you as part of that conversation or discussion.

I tried to make private Facebook groups that my students would use alongside their classes. That worked all right. But it kind of felt like I was walking into their living rooms and telling them to start discussing philosophy. And then, of course, the millennials stopped using Facebook.

The thing that creeps me out, and it creeps everybody out, including my students, is if you've got an online discussion forum that is somehow AI monitored or guided or directed, then it's creepy. If I put something on there, if I say, "Okay, here's my thought about Aristotle," and then an AI bot amplifies it or grades it or spits it back at me or shares it with somebody else or categorizes it, that's weird. That's very weird.

I do use Perusall, which has an AI algorithm that automatically ranks and assesses student work, gives me feedback I can use in the classroom, and gives students feedback, and I've found ways of integrating and incorporating it. These integrations are pretty labor intensive in some cases, especially on the front end. In the design and implementation, there's a lot of labor to make sure that you've calibrated it right, that you've introduced it in the right way to your students, that you've taught them the tool, et cetera. It sometimes saves time, it sometimes is effective, and sometimes it is not.

Graham Clay: I gather that the final assignment for "God and the Good Life" is an apology, in that students explain and justify who they are as a philosopher, as a thinker, as a person. That assignment, if it's a take home essay, is particularly susceptible to plagiarism with LLMs. Tell me how you think about AI plagiarism, especially in connection with this course.

Paul Blaschko: Before I had ever heard of GPT, I had a colleague on a consulting call (I consult with other universities about how to design high impact, digitally integrated courses) who showed up one day and said, "Okay, it's over." They're like, "Teaching philosophy is over!" They showed me GPT, and of course I was a skeptic. It was terrible, at the time.

Within six months, it was spitting stuff out that was better than the freshman level papers that I was receiving. And these are Notre Dame students. And if you learn how to prompt it in the right way, it's much better. It's like, better than some grad student papers that I've read. It's better than my papers. I then empathized with the moral panic that I started seeing come out in these op eds, in The New York Times and everywhere.

Then I thought, "Okay, this isn't the end. This is something exciting. This is the next thing." Now, the first time that we really had to deal with it was this past semester, or the first time I dealt with it in class was this past semester in "God and the Good Life." And here's what I told my students: "I've designed an educational experience that I believe is intrinsically valuable for you. And if you go through it and you write this final apology essay, you're doing something that's not merely instrumentally valuable. Yeah, of course you're gaining critical thinking skills. You're writing arguments. This will be important when you're in your job. Your employers will love it, whatever. But really, I do, to the core of my being, believe that every person should write down, in sort of an apologetic form, in an argumentative way, an articulation and defense of their worldview."

If I was a freshman, I would think "Yep, great. You have no idea how busy I am, how many pressures I'm under, et cetera. Thanks for your sort of paternalistic advice. I'm still going to use it." My conversation with them was like: "Absolutely use it." I don't know if I'm going to get push back for this, but I use ChatGPT constantly in my writing. But the way I use it is as a research assistant. I think it's more powerful than Google. I'm not going to ask it for citations or references because it hallucinates a lot with respect to those things. But I ask it, "Can you just reorganize this information for me? What's the most important idea here?" I put a paper in the other day, and it was on the etymology of the word technê, and I just said like, give me a one-page summary of this thing. And it saved me 6 hours of work.

So, when I talked to my students about it, I said, "Look, if you want to use ChatGPT, you are absolutely welcome to use it. But you have got to use it well. And that itself is a skill. And it's a skill I'm not going to teach you." My deal with them was that if they used it, they needed to write me a paragraph explaining exactly how they used it. And I said, "You can't take any text and directly insert it without quoting it. Just like in a paper, you can't plagiarize" (though "plagiarize" is a term that doesn't quite fit here, I would say). In the papers that I got back, a lot of students reported that they didn't use it at all. Of course, some of them were lying, but a lot of them, maybe half, said they didn't use it at all. Now, I'm sure there are some that fell through the cracks. I'm sure there were some that really heavily used it, didn't cite it, didn't explain it. There are also students who hire somebody online to write their papers, and I don't catch them. But for me, if you can prompt it, if you can use it in the right sort of way, if you're an ethical person and cite it in the right sort of way, and the end product is the thing that I'm looking for and it's great, that's awesome. And it requires a lot of the skills that we do care about in the class. And so, I'm not at all afraid of ChatGPT.

But it is something that I think we should think quite carefully about, and it is something that I think is a pretty massive threat to a lot of traditional methods and modes that are used in higher education. But those are mostly things that I think need to die. And, so, if AI kills them quicker, great!

Graham Clay: You developed a new course about the role of work in our lives as part of the program that you direct, the Sheedy Family Program in Economy, Enterprise, and Society at Notre Dame. Can you explain this program and the course?

Paul Blaschko: The program is a cohort-based honors program mixed with a minor where students come in and study the intersection of business and the liberal arts. We've got a lot of students from our business school that have a minor or major in arts and letters. We've got a lot of history, philosophy, Program of Liberal Studies majors who've got a minor in digital marketing or something like that. So, they've got to have both those interests in that program.

The second course in what I think of as the core curriculum for the program is a class that I teach called "The Working Life." And it's kind of like "God and the Good Life" at work, in a sense. The central question is: if you want to have a happy, meaningful, contented life, what role should work play in that life?

We draw from a lot of classical sources. We read Aristotle at least four times throughout the semester. We spend a couple of days on the Stoics. We even dive into existentialism. We're asking Marcus Aurelius or Epictetus – who wrote a surprising amount on this question – about work. Epictetus says, "Look, if you are so dependent on the praise of your boss, if you are so set on becoming wealthy that you are in constant turmoil, you're never going to be happy. It doesn't matter if your boss claps for you every single day as you walk into the office. If you are promoted seven times in the first year, if you get $200,000 as a salary in your third year of working, you're never going to be happy, because this attachment to these contingent things outside of yourself guarantees that you're never going to find a stopping point. You're never going to be content." And then he gives really practical advice. He says, "You've got to meditate on those things that are in your control, on unchangeable things like virtue." My biggest challenge in the course is to motivate my students to spend enough time with these classical texts to see that there's something directly relevant in them, and I do a lot of things to try to make that happen.

Work always has to be evaluated with respect to the idea of human flourishing. At the most basic level, I think we should define work as an activity that has instrumental value. It's not a final good or a final end in and of itself. It never is. I think we should define good work as work that promotes or constitutes a flourishing human life. And there's a lot packed into "flourishing." Again, I defer mostly to Aristotle and quite a bit to Aquinas on this kind of stuff. But I think you need a really robust, philosophically and anthropologically grounded notion of what it is that we are, what human nature is.

Let me give an example. I have students say, "I want to be a management consultant. Is that good work?" And my first question is, "Well, what does a flourishing life look like for you when you're 50?" And they're like, "Well, I want enough money that my family is taken care of." And I'm like, "That presupposes that you've had enough time to have a family, that you haven't been so sucked into your work that you couldn't develop and cultivate relationships such that they want to be around you, and that you've been able to afford a house so that they can live in it. Et cetera." Not all of your time is going to be spent earning more money or something like that.

Now, there are a lot of hard questions that this opens the door to. For instance, are the twelve-hour days, six days a week, that you have to commit to worth it? Are you willing to get burned out along the way? Are you willing to accept the cost of living that comes with the tastes you're going to develop, the luxury, and the quality of life that you want? You start to have to think about those really hard tradeoffs. Now, for me, that's a huge win. That's a huge victory, right? Because the framework has allowed them to start evaluating, in really concrete ways, decisions that they're going to have to make in a week or a month or a year. It all goes back to this really simple idea that good work is work that's integrated into a human life such that it promotes, is conducive to, or is constitutive of human flourishing.

I do think that there are people who are situated such that good work is not currently available to them. And I think that is a political, structural, systematic problem. There are various duties and obligations we've got collectively to address it. But this framework also puts good work within reach for many people. Again, depending on your vision of the good life, of the distinctively human good of flourishing, it can help free you from some really big myths that can be life destroying.

As you can tell, I do believe strongly, as a practitioner of philosophy as a way of life, that my role as a professor is both to be an intellectual guide but also to develop a school of thought. And that's controversial. And I've talked about that at a lot of pedagogy conferences, a lot of philosophy conferences. I get a ton of push back on it, but I think it's not as radical as you might think if you look at the history.

Graham Clay: What do you mean by "school of thought"?

Paul Blaschko: Students who went to Athens to study with Plato weren't just going to get a bunch of technical skills with no values imbued in the curriculum at all and then go out and just kind of make all their own decisions about life, et cetera. They were going because Plato and Socrates had articulated a view of the good life, and they were attracted to that view.

Now, it's not like they were indoctrinated. It's not like the ideology had to be just accepted. They were taught to critically examine it, think about it, fight with the teacher. And indeed, the best students did. Aristotle writes that we can't just accept something because our friends believe it. We've got to critically interrogate it. Sometimes he makes fun of Plato.

I think most schools of thought today are on Reddit, or Spotify in the case of Joe Rogan's following. Effective altruism is a movement. It's not a values neutral way of introducing people to ethical thinking in general. The effective altruists say, "We presuppose some things, and we're critical about it, but this is what we think."

I think part of my job as a professor – at least teaching what I teach – is to establish that kind of school of thought. I tell my students on day 1 that I'm an Aristotelian, I'm a Catholic, I'm a bit of a Thomist, and I'm a little bit Marxist. And I'm going to try to sketch out a worldview with respect to this question of work that I want you to critically engage with. I don't want you walking out of here as miniature versions of me, but I want you to substantively engage with this positive view.

Graham Clay: I suspect I know your answer to this question already, but should professors teach the tools – teach the tech – or focus on the timeless general intellectual virtues or frameworks?

Paul Blaschko: I want to say I think that at least in higher education, but this might be for education generally, we're always teaching virtues or skills, and we're virtually never teaching tools, even if that's the explicit learning goal that we have in a class. The example that I think of is when I was in middle school, they taught me how to use Java and HTML to build a website. And by the time I was out of middle school, there was nobody in the world that was building websites that way. Now, you might think that was a waste of time, but I don't think so at all. I think what I was really learning was intellectual virtues like curiosity, confidence, and courage in approaching technical tools, technological tools – a trust in certain ways of doing things, methods of doing things.

I probably wouldn't teach a unit on how to use ChatGPT to generate good philosophical research. I might if I was doing a grad seminar, but even if I was teaching them, I would say, "Here's how you prompt the system. Here's what the system looks like. Here are the moves that you make in it." The thing I'm more interested in is imparting those sort of technologically focused virtues like curiosity and honesty, right? Because you can manipulate these things in good or bad ways. And I think you can teach those virtues through that very concrete process of showing them how to use those technologies.

Graham Clay: What's a non-AI tech tool that you use that you think professors should use more?

Paul Blaschko: I'm going to give you two. And the first one is obnoxious, maybe: physical space and writing implements. This is a technology that I do not think we pay enough attention to. When the ed tech people at Notre Dame come to me and they say, "Hey, what technology do you want to see in the newest classrooms that we're building?" I tell them comfortable chairs, tables, and whiteboards. Because a lot of our classrooms, even if they're packed with flat screens and smartboards and projectors, they've got chalkboards and they've got these chairs that you can barely sit in.

So much of my teaching process is active and engaged, and a limited classroom prevents me from walking around and listening to what the students are saying. They can't gather around a table and just, like, spit out ideas and brainstorm together. There's no place to put post-it notes. They've got to share one board, one wall, and they can't walk up to it. They can't even go to the bathroom because that would require everybody to move their backpacks. It's insane to me that we live in such an advanced, technologically savvy world, et cetera, and we're still packing 40 people into a room that, like, five people could sit in comfortably.

The non-obnoxious answer is ThinkerAnalytix. It is basically a critical thinking course developed by folks out of Harvard that you can build into your own course as a unit. What is phenomenal, in my mind, is that they've used technology in exactly the right way, given the educational process, because in order to learn critical thinking, you need to practice extensively, you need immediate feedback, and you need to do it on your own over and over and over. And there's no other critical thinking or logic system that I have found that's so thoroughly technologically integrated. It's a mastery-based system the students go through.

Graham Clay: Tell us: how has maintaining your social media presence on TikTok – or in general – affected you as a professor?

Paul Blaschko: There have been two benefits. First, it helps me connect with the concerns and goals that my students come into my class with. When you build a following, it's not a random sample, so it's people who are already interested in philosophy a bit, and they kind of want to know a little bit. And so to read the comments, to engage with people on the platform to see what they're really drawn to and what they're not is constantly surprising me. I'll do a video on Pascal's Wager, and I'll be like, “This is garbage. Like, who cares? This is such a stupid argument.” And it will go viral, and people will go nuts about it. I'll do something on Doxastic responsibility and think, “This is the most interesting thing I've ever talked about.” And three people will view it and unlike it, you know what I mean? It really helps me see where students are coming from.

Second, it also challenges me to communicate much more effectively and efficiently. If you pause in the wrong sort of way on a TikTok video, you've lost them. If you explain a distinction that's not necessary to grasp the core point, you're not really connecting with everybody in the room. As an educator, I use so many of the communication techniques that I've learned on TikTok in the classroom. My lectures are short, five minutes, and then we need an activity, we need engagement, we need discussion.

There are also two drawbacks I will mention. First, it has completely ruined my attention span. It's intentionally designed as a platform to addict you to it and then to completely destroy your attention span. And, relatedly, it has taken so much time away from reading and activities that I enjoy. My children are suffering – just kidding. It gets songs in my head that are just ineliminable and just stupid, and I hate it.

The other drawback I'd say is that, even apart from consuming TikToks, there is a way in which being successful on any gamified social media communication platform is going to instill habits that are not always best when considering your goal as an educator. If you're on Twitter all the time, you learn that inflammatory stuff gets you points. And I've seen this in our profession at conferences: people have developed habits of saying inflammatory, half baked, stupid things because they know that gets people riled up.

There are times where I need my students to pay attention, and I need to very clearly, carefully, methodically think through a page of text with them. But the habits instilled by TikTok kind of push against that. Even as I'm lesson planning, I'm thinking, I'm going to read a whole page with them and stop after every few sentences. And I'm like, that's not fun. That's not exciting. That's not entertaining. They're not going to love this class, period. And so the ways that it seeps into your character can be pernicious, and I think overdoing it and becoming very successful across all these platforms would be pernicious. That's one reason why, in my own case, I'll post for six months, then take six months off, and then produce content again for three months. I think it's really important to guard against.

Graham Clay: What is a claim you commonly hear asserted about AI – let's focus here on large language models – that you would bet $100 against?

Paul Blaschko: I hear this all the time. People say certain skills that we've relied on to justify the humanities are going to be made obsolete by AI. So, for instance, writing clearly. We'll often argue as philosophers or English professors or whatever, "Well, look, you've got to take these core classes because if you can't write clearly, you'll never get a job as a copy editor or a job preparing memos for your business. You're not going to be able to do that effectively and that sort of thing." Now, I do think it's true that AI like ChatGPT is going to eradicate the need for copy editors. It's going to eradicate the need for people who construct finely tuned business memos.

But I don't think that the core idea – that the humanities impart skills, virtues – is really challenged at all by AI. Instead of making it obsolete to be a good critical thinker and to be able to carefully craft arguments and prose, et cetera, I would say, "I'm not just going to bet $100, I'm going to double down and bet $200 that AI is going to create more demand for these skills, and it's going to create demand for using these skills at a higher level than people are used to using them now."

I was a copy editor as an intern in college. I got a lot of good skills from my writing classes that helped with that. And then Grammarly and spell check came along and did that stuff for me. I was then able to actually read unsolicited manuscripts for substance and give feedback on them. That was so much better. Not having to have somebody in your organization read line by line and underline the misspelled words allows that person, if they have that core humanities skill, to do something that's better and more interesting, to do it faster, and to produce more of it. AI is actually going to create more demand for those skills, and demand that manifests in more interesting, more creative, better work for the people involved. That's my bet.

🔊 AutomatED’s Subscriber Referral Program

We're sure you're not the only one in your network intrigued by the rapidly evolving landscape of AI and its impact on higher education. From the optimists who foresee AI revolutionizing education by 2030 to the skeptics who view it as mere hype — and everyone in between — we welcome diverse perspectives.

Why not share AutomatED with them?

Once you've made two successful referrals, you get membership in the AutomatED tech and AI learning community. This platform is designed to thrive on the participation of engaged and informed members like you.

To make a referral, simply click the button below or copy and paste the provided link into an email. (Remember, if you don't see the referral section below, you'll need to subscribe and/or log in first).

Stay tuned, as we have more exciting referral rewards coming your way! From exclusive discounts on premium guides to access to additional curated content within our learning community, the perks of spreading the word about AutomatED are only set to grow. Share the knowledge, reap the rewards. The future of education is at your fingertips!

💭🗨️ 1-on-1 Consultations with the AutomatED Team

The purpose of AutomatED’s 1-on-1 consultations is to help professors (and others in higher education) with personalized guidance on integrating tech and AI.

Our default format is a one-hour 1-on-1 consulting session on Zoom. Afterwards, we will provide you with a custom, actionable plan so that you are well-equipped to supercharge your fall courses.

Alternatively, if you're keen on exploring various possibilities or considering a different consultation format (we offer group/team consultations as well), why not schedule a complimentary 15-minute exploratory Zoom call through Calendly? Just tap the button below to set it up: