Moral AI? My Interview With a Top Ethicist

Plus, Microsoft announces Azure AI Studio and I share your comments.

Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In this week’s piece, I interview one of the world’s leading ethicists about moral AI, I provide some AI updates from Microsoft (including virtual agent rollouts), I share some reader mail (and poll results), and I open enrollment for those who missed our free Insights Series.

🗣️ Walter Sinnott-Armstrong on Moral AI

After hearing about the new book Moral AI: And How We Get There from Pelican Books, I knew I had to get it.

Here’s the abstract that drew me in:

A balanced and thought-provoking guide to all the big questions about AI and ethics

Can computers understand morality? Can they respect privacy? And what can we do to make AI safe and fair?

The artificial intelligence revolution has begun. Today, there are self-driving cars on our streets, autonomous weapons in our armies, robot surgeons in our hospitals – and AI's presence in our lives will only increase. Some see this as the dawn of a new era in innovation and ease; others are alarmed by its destructive potential. But one thing is clear: this is a technology like no other, one that raises profound questions about the very definitions of human intelligence and morality.

In Moral AI, world-renowned researchers in moral psychology, philosophy, and artificial intelligence — Jana Schaich Borg, Walter Sinnott-Armstrong and Vincent Conitzer — tackle these thorny issues head-on. Writing lucidly and calmly, they lay out the recent advances in this still nascent field, peeling away the exaggeration and misleading arguments. Instead, they offer clear examinations of the moral concerns at the heart of AI programs, from racial equity to personal privacy, fake news to autonomous weaponry. Ultimately, they argue that artificial intelligence can be built and used safely and ethically, but that its potential cannot be achieved without careful reflection on the values we wish to imbue it with. This is an essential primer for any thinking person.

I reached out to one of the three co-authors, Walter Sinnott-Armstrong, to get some insight into the book and his views in this space.

Walter works right down the road from me (I’m at the University of North Carolina at Chapel Hill): he is the Chauncey Stillman Distinguished Professor of Practical Ethics in the Kenan Institute for Ethics at Duke University (I should note that this is only one of his many titles, per the link above).

It’s rare that one of the world’s leading ethicists engages this deeply with AI, especially in an interdisciplinary fashion.

I am grateful that he agreed to be interviewed.

In this interview, Walter explains why an interdisciplinary approach is essential to grappling with the implications of AI, discusses how higher education should approach AI, advocates for team-taught courses for enhancing students' understanding of AI and moral AI, and much more.

So, without further ado, here is the interview…

Graham Clay: In Moral AI, you and your co-authors explore foundational questions about AI's intelligence and morality. Could you elaborate on how you define or explain 'moral AI' in the book?

Walter Sinnott-Armstrong: The definition of AI is controversial but not worth debating. Instead of arguing about what AI really is, we simply adopt an intentionally broad definition of AI as any machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments with sufficient reliability. (Our definition was modified from the U.S. National Artificial Intelligence Initiative Act of 2020.) Under this definition, AI is artificial because it is machine-based, and it is intelligence because it reliably achieves objectives or makes predictions or decisions. This definition is broad because it includes a lot of systems that others would not describe as AI, but that’s okay. We want to talk about these systems whether or not they really are AI according to other definitions.

The definition of morality is also controversial and has been debated among philosophers for centuries. However, the main concerns for moral AI are harms, rights, and fairness, so we can define moral judgments as judgments of which acts, people, or systems are good, bad, right, or wrong that are based on whether they cause harm, are unfair, or violate rights. Harms here include death, pain, and disability, and rights include rights to privacy and autonomy. More could be said, of course, but this simple definition captures what is important here.

Moral AI as a field then studies various uses of AI and asks whether and when they cause harm, violate rights, or are unfair in some way and whether there is any adequate justification for doing so. Our book discusses many cases in which AI creates these moral dangers.

Moral AI as a goal is to make AI systems morally better or permissible by preventing them from causing harm, violating rights, or being unfair without adequate justification. Our book discusses two novel ways to make AI moral, which is to achieve moral AI as a goal. One way involves building human morality into AI systems. The other involves building morality into AI businesses. Neither of these tasks is easy, but we lay out procedures that we can begin to develop now.

Graham Clay: Given the rise of semi-autonomous AI "agents," like Devin, which can perform complex tasks across various apps and tools with minimal human oversight, could you give us a brief gloss on the ethical frameworks developed in your book that should guide their development and deployment? 

Walter Sinnott-Armstrong: As I said, our ethical framework in the abstract is to avoid causing harm, being unfair, or violating rights without adequate justification. These rules apply to all uses of AI, including Devin, so Devin should be developed to avoid these kinds of moral wrongs unless justified by overriding benefits.

Avoiding these wrongs while developing and deploying an AI system like Devin will not be easy in practice. It will require sensitivity to moral dangers, motivation to do the morally right thing, and practical knowledge of how to avoid moral wrongs while still gaining as much as possible of the potential benefits of the AI. We propose five “Calls to Action” to inculcate these moral virtues in AI developers and deployers: scale moral AI technical tools, disseminate practices of moral AI, provide training opportunities, engage civic participation, and design agile public policies. The last chapter of our book describes these calls to action in detail.

Graham Clay: In Moral AI, you argue for a careful reflection on the values we wish to imbue in AI. In the context of higher education, how should administrators and educators balance the push for technological innovation against the need for ethical constraints?

Walter Sinnott-Armstrong: One primary goal of higher education is to prepare students for the world that they will live in after graduation. In that world, they will need to compete with other people who know how to use AI to improve the quantity and quality of their outputs. To give students a fighting chance in this post-graduation competition, administrators and educators should not forbid all uses of AI by students. That is hopeless and pointless. Instead, they need to find ways to help students learn to use AI properly and also to motivate students to want to use AI properly — that is, effectively and ethically.

Doing so need not undermine technological innovation. When a technological innovation helps many people without harming other people too much, it is not unethical. And when a technological innovation does harm other people or is unfair, we want its developers and deployers to look hard for another way to gain those benefits without those moral costs. Recognizing the importance of ethical constraints can then fuel a different kind of innovation by encouraging the search for better solutions. When someone judges that a technological innovation violates an ethical constraint, the reaction should not be to give up but instead to find a different innovation that does not violate ethical constraints.

Graham Clay: Do you or would you allow your students to use AI for papers or projects? What conditions need to be met for you to judge that AI use has pedagogical value (in general or in philosophy)?

Walter Sinnott-Armstrong: Forbidding all use of AI for papers and projects is not a viable solution. Students will find ways around these prohibitions and will lose respect for the rules. Instead, we need to find ways to enable students to use AI to learn even more than they would without AI and to learn skills that will benefit them after they graduate. A good teacher should adjust pedagogy to fit the real world.

One possibility is to ask students to develop their papers in stages using AI (such as an LLM) at some but not all stages. Students could get AI to write a paper on their chosen topic and then submit their prompt, the file written by AI, and an edited version where the student corrects the file with all changes marked. Alternatively, students could learn prompt engineering by asking an AI to write a paper in response to one prompt, revising that prompt in order to get the AI to write a better paper, and iterating this process until the paper meets the student’s standards—submitting all versions of the prompts and outcomes for the professor to grade. Yet another method is to ask students to specify a precise claim that they believe, but which they know to be controversial, and get an AI to write an argument for that claim and an argument against that claim, after which the student adds marginal comments evaluating both arguments. The goal of these assignments is to train students in critical thinking and to learn by using AI.

Graham Clay: Your collaboration with a game theorist and a neuroscientist highlights the benefits of interdisciplinary perspectives in understanding AI ethics. What specific interdisciplinary methods or content do you recommend incorporating into higher education curricula to enhance students' understanding of AI, moral AI, and/or the ethical use of AI?

Walter Sinnott-Armstrong: Regarding content on moral AI, I would recommend assigning our book, Moral AI. I hope that recommendation does not sound too arrogant, but the point of our book is precisely to incorporate interdisciplinary perspectives, enhance understanding of fundamental issues, and stimulate discussions among experts and students as well as in the general public.

More specifically, the content that I recommend can be organized around different harms that can be caused: (1) safety or risks of causing death and pain when AIs make mistakes in medicine, transportation, or the military, (2) losses of privacy and autonomy, especially when AIs track users’ locations and behaviors in ways that increase the costs to those users of doing what they want to do, and (3) unfairness or injustice both in distribution and procedure, especially when AI is used in law and businesses (such as hiring, salaries, and promotion). Other topics in moral AI can be seen as ways to reduce these harms: (4) assigning responsibility and liability for compensation and punishment, (5) building human moral judgments into AI systems, and (6) training AI designers and distributors in moral judgment and motivation. These topics are discussed by many other authors, but they can be illuminated by framing them in light of a moral theory that emphasizes causing and reducing identifiable harms.

All of these topics require interdisciplinary perspectives. To pick just one example, one cannot discuss whether AIs used in criminal courts are just without including perspectives from philosophy on what justice is, from sociology and economics about inequalities in our societies, from computer science about how these AIs work (and whether they are interpretable or explainable), from political science and law about which procedures and regulations are needed, and from psychology and neuroscience about people’s attitudes towards using AI in law. We all need to work together on these complex issues.

Regarding methods for enabling and facilitating such interdisciplinary collaborations, I recommend team-teaching. I do not mean courses where one professor teaches the first half and another professor teaches the second half. Instead, each professor should be actively involved in each class meeting. Their interactions in class can promote mutual understanding between the team-teachers and with their students.

The biggest problem that I have found in arranging such team-taught courses is in getting permission from department chairs and deans, who often want more basic disciplinary courses to be taught by a single professor. Truly interdisciplinary team-taught courses clearly benefit students but cannot happen without support from administrators. Duke University does this better than any place I know, and that is one reason why I moved to Duke. Other universities should follow Duke’s example.

Graham Clay: What was the most surprising or challenging insight you encountered while writing Moral AI? Why did it surprise or challenge you?

Walter Sinnott-Armstrong: This question is difficult, because I encountered so many surprising or challenging insights. If I had to pick only one as the most surprising or challenging, I would mention the many ways in which AI itself can be useful in countering moral problems with AI. AI in social media threatens privacy, but new algorithms can help to reduce threats to privacy. AI can lead to unfair decisions in criminal courts, but other AIs can monitor and correct for unfairness in criminal courts that use AI. In these cases and many more (though not all other cases), the solution to immoral AI is moral AI rather than no AI.

Graham Clay: How much of the book did you write with ChatGPT, Gemini, or Claude? Be honest!

Walter Sinnott-Armstrong: We did not write any of our book with ChatGPT, Gemini, or Claude. We did consider adding a poem written by ChatGPT as an example—with citation, of course—but we decided against it only because the poem did not illustrate the point we wanted to make. In any case, the problem lies not with using ChatGPT, Gemini, or Claude, but rather using them without citation. If publishers, readers, and everyone else know that a book was written by ChatGPT, Gemini, or Claude, then it is up to them whether they want to buy and read it. Deception is the problem.

Thanks to Walter for taking the time for this interview! I am definitely going to assign Moral AI in my next “philosophy of AI” class!

You can get the book on Amazon here, preorder it on Indigo here, or use Penguin’s “Shop Local” feature to find it at a store near you here.

📢 News of the Week: Microsoft Build’s AI Updates

Speaking of AI agents, the topic arose several times at Microsoft Build 2024. Comparable to Google I/O, which I covered last week, this event is aimed at developers who build, develop with, or deploy Microsoft software across various platforms.

From Microsoft’s announcements at and after the event, it is clear that they are continuing to invest heavily in the education market, just as OpenAI — their partner — independently releases ChatGPT Edu (more on that later this month).

Although there weren’t as many updates as there were from Google at I/O, there were some big ones from Microsoft that are notable for (higher) educators. Here they are:

Team Copilot

Team Copilot is like Copilot for Microsoft 365 — Microsoft’s integration of LLMs in the 365 suite, distinct from their standalone chatbot (“Microsoft Copilot”) — but embodied as a member of your department or team. Rather than offering impersonal interfaces in various apps, where the LLMs you interact with have continuity in behavior only because your files and work have continuity, Team Copilot acts as a unified agent or “valuable team member.”

  • Why it matters: Available in Microsoft Teams, Loop, and Planner, Team Copilot can significantly improve the efficiency of departmental meetings, effectively multiplying the abilities of administrative staff (or, as the case may be, creating administrative staff where there were none before). It can help ensure that projects such as curriculum development or research collaborations run smoothly, with clear communication and task management. This allows us to focus more on our core responsibilities while organizational tasks are handled efficiently.

Azure AI Studio

Azure AI Studio, just made broadly available, is Microsoft’s version of Google’s AI Studio. Or, put differently, it is like OpenAI’s custom GPT builder, but more advanced, with much more control, and securely positioned within the Microsoft ecosystem. You can pick and choose different LLMs as your generative AI’s engine (including GPT-4o), analyze its performance, fine-tune it on a curated data set, link it to other software, and more (here’s a video summarizing what it can do).

  • Why it matters: In short, this platform allows educators whose institutions have added Azure AI Studio to develop more advanced AI applications tailored to their specific needs. For instance, professors can create more sophisticated AI tutors that are more specialized (i.e. capable of producing quality outputs on advanced content in their fields) and that are easier to evaluate and test.
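To make the “link it to other software” point above concrete, here is a minimal sketch of what calling a model deployed through Azure AI Studio can look like from your own code. This is a sketch under assumptions, not Microsoft’s official example: it assumes your institution has provisioned an Azure OpenAI deployment and that you are using the standard `openai` Python package (v1+); the endpoint, API key, and deployment name are placeholders.

```python
# A minimal sketch of calling a model deployed via Azure AI Studio,
# using the standard `openai` Python package (v1+). The endpoint, key,
# and deployment name are placeholders supplied via environment
# variables -- substitute the values your institution provides.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g., https://<your-resource>.openai.azure.com/
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# "model" is the deployment name you chose in Azure AI Studio,
# not the underlying model family (e.g., GPT-4o).
response = client.chat.completions.create(
    model="intro-logic-tutor",  # hypothetical deployment name
    messages=[
        {"role": "system", "content": "You are a patient tutor for an introductory logic course."},
        {"role": "user", "content": "Explain modus tollens with one short example."},
    ],
)

print(response.choices[0].message.content)
```

Because the calling code only references the deployment name, you could swap in a different underlying model or a fine-tuned variant from within Azure AI Studio without changing the application that depends on it.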

Copilot Studio

Copilot Studio is like Team Copilot, but you will be able to build the agent to fit your needs. Previously, Copilot Studio was Microsoft’s version of OpenAI’s custom GPT builder. Now it is getting an agential makeover. That is, you can build custom Copilots to act as agents that serve whatever purpose you need within your or your organization’s Microsoft 365 ecosystem, leveraging all that you do in 365.

Partnership with Khan Academy

Microsoft and Khan Academy have partnered to provide free access to Khanmigo for Teachers, an AI-powered teaching assistant, to all K-12 U.S. educators. This collaboration also aims to improve math tutoring using Microsoft’s Phi-3 small language models (SLMs), which I reported on in April, and integrate more Khan Academy content into Microsoft Copilot and Teams for Education.

  • Why it matters: It remains to be seen how this affects higher education, but it is a promising development on two fronts. First, it might be a precursor to Microsoft releasing a more advanced version of Khanmigo for universities, such as for core university courses (e.g., Calculus I) that aren’t radically different from high school courses. Second, it is promising that Microsoft is heavily emphasizing ways in which their deal with Khan Academy will help them improve their approach to AI and education more generally, as well as specific products like Teams. (By the way, if you are a K-12 educator, see the summer game “Leaps and Logs” released by Microsoft, as well as Microsoft’s generative AI primers for K-12 educators.)

Visual Studio Code for Education

Microsoft has launched Visual Studio Code for Education, a free and accessible online platform, built on AI, designed for teaching basic computer science. The platform includes an integrated curriculum and a sandbox coding environment that requires zero setup, making it useful for both students and educators across various devices and platforms. (Relatedly, Microsoft announced that they are partnering with Cognition, creators of the aforementioned Devin, so software development agents are clearly going to be a point of emphasis in the near future.)

  • Why it matters: Visual Studio Code for Education offers tools used by professionals, allowing students to learn languages like Python, CSS, and JavaScript in a real-world context. It provides interactive courses and coding challenges that engage students in practical learning experiences. The platform’s built-in accessibility features ensure that all students, including those with disabilities, can learn to code effectively. As the fall semester approaches, educators teaching the fundamentals of computer science can utilize this platform to enhance their curriculum and provide students with valuable, hands-on coding experience.

Which Microsoft updates from Build are you most excited about?


🧰 Enroll in Our AI and Higher Ed Primer Email Series

What’s the difference between Copilot Pro and Copilot for Microsoft 365?

What is zero-shot prompting?

What is “pairing” and why does it discourage AI misuse by students?

What are two ways to use AI for feedback and grading that carry no privacy, data, or — arguably — moral risks?

If you don’t know the answers to these questions, you may want to enroll in our “Insights Series.”

This is a 7-email sequence for new subscribers that conveys who we are and some crucial information that we have learned and written about in the past year.

Like AutomatED, it is designed for professors and learning professionals who want actionable but thoughtful ideas for navigating the new technology and AI environment — and it will get you up to speed with AutomatED, too. (These emails come in a sequence, with delays, on Sundays, Tuesdays, Wednesdays, or Thursdays.)

Every new subscriber to AutomatED is now enrolled in it by default, but you may have signed up before we released it… Sorry about that!

Still, you can enroll yourself manually by clicking “Yes” below. You can always unenroll starting in the second installment without unenrolling from AutomatED’s weekly newsletters.

Would you like to be enrolled in the AutomatED Insights Series?


✨Recent and Upcoming Premium Posts

June - Tutorial: Easy Student Consent Management in Microsoft 365

June - Guide: How to Train Students to Use AI

✉️ What You, Our Subscribers, are Saying

Which Google updates from I/O are you most excited about?

Are you considering using custom GPTs now that students won't have to pay to use them?

“I believe that educators are obligated to both learn how to use AI and to teach their students about AI and how to use it for educational purposes.”

An Anonymous Subscriber

“I agree the monthly charge was the biggest impediment to using Custom GPTs in an educational environment.”

An Anonymous Subscriber

Thanks for sharing your comments! Feel free to respond to any poll or any newsletter to reach me and potentially have your comment included in a newsletter.

Graham

Expand your pedagogy and teaching toolkit further with ✨Premium, or reach out for a consultation if you have unique needs.

Let's transform learning together.

Feel free to connect on LinkedIn, too!

What'd you think of today's newsletter?
