A Librarian's Take on Tech, AI, and Information Literacy

We talk to Patricia Sasser, music librarian at Furman University, about libraries' evolution and information literacy as we enter the AI era (or not).

[image created with DALL-E 2]

Welcome to AutomatED: the newsletter on how to teach better with tech.

Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

Let’s hear what librarian Patricia Sasser thinks about the technological evolution of the library, the teaching of information literacy with and without AI, a claim about AI she would bet $100 against, and many other topics.

The Interview

Graham Clay: Tell me who you are, and a bit about your role as a professor.

Patricia Sasser: I direct the Maxwell Music Library at Furman University, and I am also currently serving as the Associate Director of Outreach and Access Services for our University Libraries. I have been a professional librarian since 2009, and I've worked in a number of different contexts, but always within higher ed and libraries.

Graham Clay: And what is your training and background?

Patricia Sasser: I earned an undergraduate degree in music, and then I went on to earn a graduate degree in music history at the Peabody Conservatory of Johns Hopkins University. And during that period, I identified my interest in working in libraries and archives. I then earned a second graduate degree in library and information science. Although I have had responsibilities beyond it, music is the central focus of my work and is certainly the focus of my scholarly interests.

Graham Clay: As the director of a music library, what is one way that technology has fundamentally changed what you do compared to someone in a similar position in 1980?

Patricia Sasser: Music is, I think, a great paradigm for answering this question. If you think about listening to recorded music 40 years ago, your options were very structured. Besides the radio, people were primarily using LPs and cassette tapes to access music. So much of librarianship in that period was focused on access as the primary mode for thinking about information. The problem was always: "how do you get access to what you need?"

That has been such a shift in librarianship and music. Once music became portable (CDs, boomboxes, the Walkman), things really started to shift. And now, of course, when we “collect” music what we are doing is paying for access to databases just like everyone else.

The transition away from thinking of access as our primary challenge has been profound. Instead of searching for rare and unusual recordings, our problem is the reverse. Now we have an abundance, you might even say a "surfeit," of information. And the issue is not so much access to that information, but rather navigating through it. So, I think that has been the primary professional shift.

There's so much out there. If you go to YouTube and search for any piece of art music, most of the time you're going to find so many recordings. Just sorting through and choosing among them is the problem. Whereas in 1980, with some exceptions, most music recording collections were limited in the options they could give you.

Graham Clay: Right. It goes from access to the problem of sorting, categorizing, curating, or …?

Patricia Sasser: Yes, maybe ‘navigation’ is a good word. But I think we really have transitioned away from the question of access as the primary issue within the library and archives field. That's not completely true, of course. And I do want to say that I think that's a very Anglocentric way of thinking about information because, of course, in the English-speaking world, access has always been an important value in terms of our relationship to information. But that's not true in areas of the world in which, for instance, preservation is the primary controlling value. You preserve by preventing access, because if people don't use things, they're less likely to be destroyed. And there are places where there's not a culture of access and never has been; it simply isn't a value there.

Graham Clay: When you're helping people who come to the library to access and navigate all that it can provide, what opportunities or risks does AI present, in terms of its ability to sort through information?

Patricia Sasser: ‘Information literacy’ is a phrase that librarianship, as a discipline, uses to describe this process. When students engage in the research process, we want them to be able to distinguish between information, misinformation, and disinformation within a specified domain. And let's define information in this context as accurate, factual data. Misinformation is data that is misapplied or misleading. And disinformation is information that's intended to deceive. When we say we want someone to be information literate, we're thinking on a spectrum — because it's not as if there's no further literacy to be acquired in this area — but within that specified domain you would be able to distinguish between information, misinformation, and disinformation.

For students, of course, it's really a case of identifying experts. We can define an expert here as someone who has the greatest quantity of accurate factual data. And we could define a student as anyone who isn’t very literate on a topic. I’m very illiterate about chemistry and automobile repair, so in those domains I would not be able to distinguish between information, misinformation, and disinformation. We all have very low literacy on many topics, and to be literate about everything would be impossible.

One thing AI can do is help orient a student. It can help a student take those first steps towards literacy, much as tertiary sources (like encyclopedias) did in the past. AI can quickly gather together many secondary sources on a topic and give you a good summary. And it can do that with a level of specificity that you might not always find in, say, a Wikipedia article.

Graham Clay: It can be more targeted in its response to the query or the person inquiring.

Patricia Sasser: Exactly. If you asked AI to give you a summary of politics in Sweden in the 19th century — there may or may not be a separate Wikipedia page on that, although there will certainly be one on politics in Sweden — AI can give you a very specific overview. That can help a student become oriented to this vast information landscape and give them ideas of things to look further into. It’s very useful in that regard.


Graham Clay: Where do you see AI coming into the library in the experience of the library user? Would it be on a terminal in the library via which users can access some AI that’s related to the information that they could access through your library? Or could they access it from their home computer?

Patricia Sasser: If we think of research as something like a pyramid, where we’re moving from a broad topic to a focused area of inquiry, then AI could help you identify specific sources as you narrow that focus. If an AI tool were to include citations, I would see that as a really useful function. A student could then generate summaries that allow them to determine whether they wanted to look at these sources or not. And then you would access these via the university library.

Graham Clay: So, it plays a personalized guide role, much like a librarian does. They would be saying: "Hey, you should go look at this area…and x, y, and z are the sort of considerations you should take into account." And the AI can do that faster and maybe with access to more information than such a person would have. A librarian might have blind spots or gaps in the breadth of their awareness.

Patricia Sasser: That is certainly true for both librarians and teaching faculty. We know our own specializations deeply and our disciplines broadly. AI can allow you to make more helpful recommendations for research, even in an area outside of your own specialization. That’s very useful since no one — or very few people, I should say — have that level of command over the full literature in one discipline.

Graham Clay: What's the negative side of AI, from your perspective?

Patricia Sasser: One concern is currency. I occasionally collaborate with colleagues teaching business, and business is an area that depends on very recent information. But it is often very difficult to verify the age of data acquired through AI tools.

Another risk of AI is that it can make it humanly impossible to separate information from disinformation. It used to be that information was controlled by restricting access (as in the former Soviet Union, for instance). Now that the information landscape has changed, the primary means of controlling information is not restricting access but flooding.

The journalist Peter Pomerantsev has a wonderful book about this called Nothing Is True and Everything Is Possible. With an information flood, there is no way you can figure out what's correct, because there's so much and it all seems equally implausible and absurd. Bots can use AI to produce incredible amounts of disinformation, and there's no way a human can make sense of it. Flooding is the new censorship.

This is especially relevant for students because many of the questions we want to discuss in our classrooms (public health, ethics, politics and international affairs, to name just a few) are highly sensitive to this kind of disinformation flooding.

As we encourage our students to think more creatively about what sources of information there are, it becomes very difficult. It's one thing when a student shows you a source and says: "so and so claims this." It's another thing when it's generated from thousands of sites through AI, and there's no way you can really trace back to say: "oh, here's the claim, here's the evidence, here’s the counter evidence."

Graham Clay: What do students want to use AI for in your classes? And what should they use AI for in these contexts?

Patricia Sasser: Perhaps the ideal student is someone who's intellectually curious and closely engaged with every assignment. Then there’s the student with the checklist mindset — “this assignment is really a checklist of required things I have to do.” I think all of us are (or have been) somewhere on that spectrum.

The checklist mindset thinks: "I've been told I need six sources for this paper, so I need AI to help me find six." Or: "This paper has to be four pages long, so I'll just keep regenerating the query in ChatGPT until I get four pages of plausible text." They're not even thinking on the meta level about what AI is doing. They're just trying to produce something to meet these requirements. My sense is that — in my own experience with academic integrity — students use AI because they're trying to meet requirements that they feel ill-prepared to meet otherwise, or because they don't see the purpose. They see the assignments literally as just a list of requirements.

In the past, students have done this without AI tools. AI makes it a lot easier. One of the great things about AI is that it's making everyone in academia think very carefully about the goal of all these assignments. “What am I actually trying to get students to do? What do I want them to learn? What experiences do I want them to have with research, with this discipline? What knowledge should they acquire, and what is the right mode for acquiring that knowledge?" And I think it's laying bare a lot of the shortcomings of our traditional ways. Is, for instance, producing a five-page paper that synthesizes six sources really the most effective way to teach a student about a topic?

Graham Clay: So, some of the things that students want to use AI for are them just being lazy, maybe. Or maybe they're just trying to check a box for the class. Yet, in another sense, professors need to be sensitive to the concern that some of those things aren't worth doing — students are right to wonder whether they should be tasked with doing these things. It's kind of like busy work, something that's not really useful or purposeful. And AI enables them to skip over the really "lame" part of the assignment.

Patricia Sasser: Right. Student musicians and athletes really understand this idea. There's no shortcut to practicing or training. You can't learn a piece the night before a concert or train for a race a few minutes before the event. But that’s not true of a lot of academic activities. Students learn — and AI makes it very easy to learn this, in some cases — that they can succeed at an assignment by doing it one hour before it's due. The assignment might have taken much longer if done the traditional way, but AI shows them that it actually doesn’t matter how they did it. The assignment is literally that useless.

If the knowledge gained is not iterative, if it is not something they must have to move on to the next aspect of this domain of knowledge…then it is hard to make a case that they shouldn’t use AI for these tasks.

Graham Clay: So, what do you think is a good use of AI in this kind of classroom context?

Patricia Sasser: One thing that I've been thinking a lot about is how I might use AI to help cultivate students’ domain knowledge and their ability to distinguish between information, misinformation, and disinformation. So, one thing I was thinking of doing was having students in the classroom present queries to ChatGPT and then evaluate the responses given. (I envisioned this with my junior students, so they'd be in their third year in the music history classroom and have had some semesters of music history.) I'd like to ask them what they recognize as factual. Is there anything they think is not factual, and on what basis? How do they know? I'd like to start with some very general topics in music history and then move towards some more specific questions and see what AI can tell us about them. The ultimate goal would be to understand where we come to the limits of our own knowledge and the limitations of this tool, as well as what that can tell us about its utility.

Graham Clay: What is a claim you hear commonly asserted about AI that you would bet $100 against?

Patricia Sasser: One thing I'll say that I think is different for music than for other disciplines: while modern music notation is machine readable, many of the sources that music makes use of are not.

And there is not yet a reliable tool for manuscripts, or handwritten text with annotations or marginalia. That is something the machine can't make sense of. My research focuses on the material culture of opera and ballet. I use nineteenth-century scores called ‘répétiteurs’, which were employed to run rehearsals. A répétiteur is a manuscript score of an orchestral piece in which the full score has been reduced to one line of music, usually for the violin. And then there will be directions about the music, but also information about the dance and the dancers in the vernacular, with diagrams, names, etc. This is a very rich source of information, but a machine can't read it. It could possibly get the text, but it couldn't understand the relationships. It wouldn’t understand that when it says “Oboe – Petipa” it's referring to a musical cue for the dancer: the oboe will make an entrance here, and Lucien Petipa will know this is his cue. That’s a very complex amount of information for a machine to interpret.
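[Editorial note: For readers (like me) curious what “machine readable” means here, below is a minimal Python sketch of my own (not something from Patricia) using the open-source music21 library, which parses modern notation formats such as MusicXML into structured data. The file path is hypothetical. A scanned manuscript répétiteur offers no such structure: its handwriting, diagrams, and marginalia would first have to be recognized and interpreted.]

```python
# A minimal sketch, assuming the open-source music21 library
# (pip install music21) and a MusicXML file at a hypothetical path.
from music21 import converter

# Modern notation parses into structured objects: parts, measures, notes.
score = converter.parse("score.musicxml")

# Every pitch and duration is explicit and queryable by a machine.
for n in score.recurse().getElementsByClass("Note"):
    print(n.nameWithOctave, n.quarterLength)
```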

Graham Clay: It sounds like you might bet $100 that AI is not going to reach some general state where it can do anything an expert human can do, like drive cars, with the really complex sensory modalities needed there. Do you think it's overhyped in its ability to do these kinds of specialized tasks?

Patricia Sasser: Maybe I would bet $100, but I don't really think of it in those terms. I'd be happy, in the case of handwriting or manuscripts, to be proved wrong on this. But I do think that the further you get into domain-specific knowledge and beyond machine-readable sources, the more significant AI's weaknesses become. I would gladly give $100 to the person who makes handwriting machine-readable in the ways I have described. There is so much variance out there that I think it's hard to account for it all.

Graham Clay: What if the AI turns out to be really good? What if the hype is real? What if it's equivalent to having the best teaching experts — like the best teaching professors in each area, or maybe industry leaders — all arrayed, ready to answer any of your personalized questions. That's what the AI defenders are envisioning. Would students still need to learn how to evaluate it just like they learn how to evaluate other sources of information, misinformation, and disinformation? How would that change the way you're thinking about it in the classroom context?

Patricia Sasser: Maybe I should end with this... Percy Shelley has a famous poem titled "Ozymandias." It's a wonderful poem about nineteenth-century excavations in ancient Egypt. This poem famously states: "Look on my works, ye mighty, and despair!"

[Editorial note: Here it is, if you’re like me (Graham) and don’t know poetry.]

I met a traveller from an antique land,
Who said — “Two vast and trunkless legs of stone
Stand in the desert… Near them, on the sand,
Half sunk a shattered visage lies, whose frown
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them and the heart that fed;
And on the pedestal, these words appear:
My name is Ozymandias, King of Kings;
Look on my works, ye mighty, and despair!
Nothing beside remains. Round the decay
Of that colossal Wreck, boundless and bare
The lone and level sands stretch far away.

“Ozymandias” by Percy Bysshe Shelley

That's how I feel about so much technology. I do not doubt that AI has many applications, but I think those are going to be so different from what we can imagine that any prognostication about the future is ultimately going to seem very naive and perhaps foolish. I think it's another cryptocurrency, which was very much a case of "look on my works, ye mighty, and despair." As Shelley says, nothing beside remains.

When a technology solves an old problem, it introduces a lot of new problems for us. The computer scientist Cal Newport has a wonderful analogy about information and cars. Cars are terrible for the environment and contribute to the isolation of our modern society. But no one's going back to horses, right? Cars are here. They solve a transportation problem, a really big problem. And AI solves some big problems. It's predicated on technologies that are solving real problems and addressing real needs we have.

But you also don't let a five-year-old drive a car. When we introduced cars, we had to introduce a whole new framework, including infrastructure and legislation. We've solved some problems and now we've created new ones, and we have to find ways to deal with those. I see AI like this. We don't really know yet what ultimate problems or new questions it has raised for us.

🔗 Links