Steering a Middle Course on AI in the History Classroom

By Mark Humphries

In the last few months, there has been a growing debate about how historians should respond to AI. And that’s a good thing. I’ve argued that we need to engage with the technology or risk becoming irrelevant. Recent pieces in Active History by Mack Penner and Edward Dunsworth make the case for why we should approach AI with caution and stand up to resist its use in historical practice and teaching.

One thing on which we can all agree is that teaching critical thinking is essential—probably more so now than ever before—and that higher education generally, and history as a discipline specifically, play vital roles in that regard. I also agree with Dunsworth that it would be wrong either to throw our hands up in surrender to the machines or to embrace AI as a panacea. Either course would surely lead to the destruction of history and the university as we know them. I would argue, though, that the question of how to respond to AI—especially in the classroom—remains very much unresolved.

Dunsworth argues for resistance, reaffirming the intrinsic value of deliberative human thought and mindful writing by embracing the traditional, tactile, and analog. While I agree that critical thinking and engagement are essential, I don’t believe rejecting AI is a viable way to uphold those values without ultimately distorting them into something unrecognizable. Looking at issues from a variety of perspectives, so long as they are grounded in evidence, is, after all, the essence of critical thinking. If we deny that generative AI can be useful in at least some circumstances—or worse, pretend it doesn’t exist and that our students don’t have to contend with it—we simply aren’t being true to the evidence.

I have always been a pretty traditional historian, which is why I felt I needed to learn about AI after I first encountered ChatGPT in late 2022: it worried me, and I knew I did not know enough about it to understand its implications. What I have tried to do since is find out what it can and cannot do for historians right now and keep abreast of how its capabilities may evolve over time. I also try to tell other people about what I’ve found.

The hard truth is that, while it was wonky at first, generative AI has evolved faster than any other technology in my lifetime. Whether AI can actually reason—and, for the record, that is not a settled issue amongst the researchers who specialize in such matters—is entirely immaterial to the fact that it can clearly do a lot of practical, useful things, which is why most people are using it. If you’re still a skeptic, read about vibe coding and how AI is changing software development, how scientists are using generative AI to make novel discoveries, or how it is being used in hospitals to improve patient outcomes. The same process is starting to play out in knowledge work, and there is growing evidence that it is already reshaping the entry-level job market. This is the world in which we and our students must live.

So how can we reject AI while simultaneously claiming to prepare students to live, work, and think critically in such a world? If we want our students to take us seriously and learn some of what we are trying to teach them about critical thinking, they need to be able to trust that we are honest brokers offering knowledge and ideas grounded in reality. I would not want to argue that, while doctors can use AI to help improve diagnostics and admissions decisions, a history student can’t use it to strengthen the wording of their thesis statement or to make sense of obscure terminology in a primary source.

Even if you wanted to make such an argument, try this thought experiment: is it possible for a student to do research today without bumping into AI? Most people start with a Google search, which relies on knowledge graphs and embeddings (both forms of non-generative AI) to find and rank results. It also provides generative-AI answers, which are sometimes useful but, right now, often quite bad. When you go to the library catalog, if you are at a big American school you may already have an AI-powered library search engine. If not, when you look up articles in JSTOR or EBSCO (to name just two article repositories), you’ll be confronted by AI-generated summaries and suggestions for further research. Even if you ignore those, perhaps insisting on open-access repositories, when you download a journal article and open it in Adobe Acrobat, that program will (annoyingly) offer to use generative AI to summarize the document or make notes. Finally, when you open Word or Google Docs, those programs, too, now offer to write your document for you with AI. The point is that even if we wanted to ban AI entirely, it stretches our credibility to pretend it doesn’t exist and that students won’t be confronted by it every step of the way. A student in this scenario might logically ask: if AI is so terrible, why is it embedded in all the things I am required to use to complete my degree?

Although I am sympathetic to the sentiments expressed by my colleagues, and I admire their willingness to defend our discipline, I don’t think resistance is a viable option. Nor do I think prohibition and active resistance are necessary to preserve the values intrinsic to historical inquiry. Instead, what I suggest is that we try to steer a middle course. This means accepting that AI can do some useful things for historians (transcription, translation, summarization, editing, and indexing, amongst others) but that it is not a substitute for deep knowledge and hard work. Most of all, it means ensuring that our students leave our classrooms knowing when to use AI, when to avoid it, and how to get the most out of it.

Conveniently for us, the skills one needs to do this look a lot like the skills history has always offered aspiring lawyers, teachers, politicians, policy analysts, and other knowledge workers. Certainly, we will have to make some tweaks around the edges: going forward, research papers may be less valuable forms of assessment than in-class testing. I also agree with Dunsworth that we need to double down on teaching “the value of thinking—laboured, painful, frustrating thinking.” But why does that have to mean we can’t use AI to transcribe a handwritten document we’ve thoroughly read but want to full-text search? Or to reformat footnotes from Chicago style into APA for an interdisciplinary journal? Or to convert our footnotes into a bibliography? AI use and deep thinking don’t have to be mutually exclusive. In fact, done right, the former might save some time for the latter.

As historians, we can choose to help shape how these tools are used in the world—which means making some compromises—or we can retreat into a purist position that is likely to make us irrelevant to those discussions. In my view, neither uncritical adoption nor absolute resistance will serve our students well. What they need from us is what we’ve always provided: the ability to think critically about sources, to construct evidence-based arguments, and to navigate complex information. These skills are more valuable now than ever. Our job is to help students develop them through engagement with the tools they’ll be expected to use, not by pretending those tools don’t exist.

Mark Humphries is a professor of history at Wilfrid Laurier University in Waterloo, Ontario. His current research focuses on applying artificial intelligence, specifically generative AI, to historical practice to understand how knowledge work is likely to evolve over time.
