Relevance and Resistance: Steering a Critical Course on AI

Mack Penner and Edward Dunsworth

In his case for “steering a middle course” on the use of artificial intelligence (AI) in the history classroom, written partially as a response to earlier pieces by each of us, Mark Humphries makes a number of points with which we agree. First among those points of agreement is the value of a historical education and the skills that such an education develops in students. We agree, also, that a certain media literacy and technological capacity are important skills not just for our students but for us as historians, too – and that developing those skills can be an important pedagogical goal in our classrooms. We disagree, however, with a number of Humphries’s other arguments in favour of the AI middle course, and finding those disagreements both significant and worthy of reply, we want to further the discussion here.

Among Humphries’s key arguments is one about relevance: to reject AI is to “retreat into a purist position that is likely to make us irrelevant” in the ongoing discussion about AI implementation, he claims. But to reject AI is not to ignore it, and neither is it to vacate the field of discussion. We have no interest whatsoever in pretending that AI doesn’t exist, as Humphries implies that we do. On the contrary, our position stems from a place of intense concern for how AI might warp our discipline (not to mention our world more generally) and diminish the intellectual development of ourselves and our students alike. It is critical rather than ignorant.

There is something unsettling about Humphries’s arguments for relevance. He seems to suggest that only by falling in line behind the ascendant power of AI can historians have any effect whatsoever in the classroom. Resistance is futile. “This is the world in which we and our students must live. So how can we simultaneously reject AI while also claiming to prepare students to live, work, and think critically in such a world?” Humphries asks. We don’t agree that teachers’ ability to reach their students depends on the use of the technology du jour. (Neither do we accept that this is the world in which we must live, but more on that later.) Both of us attended university after the take-off of personal computers and the Internet, and we had professors who integrated those technologies minimally – or not at all – into their teaching. In our own experiences as students, the use or non-use of technology had absolutely no correlation with the quality of instruction. We strongly suspect that this observation rings true for many readers. Universities offer students a wide range of pedagogical approaches that may or may not inform their future paths. Some professors are embracing AI, while others are rejecting it. But surely even AI optimists can recognize the value to students of this pedagogical diversity.

Our resistance to AI does not mean that we turn away. We have to be realistic, as Humphries makes clear, and recognize that many if not most of our students are actively using AI in some capacity, while virtually all of our students encounter it passively in the course of their university work. But that realism should extend a step further, to a recognition of what that usage really looks like. Reportage on this issue makes it plain that when they use AI, students are using it as a shortcut, a time-saver, a work-reducer. They are not, by and large, using AI in considered or minimal ways to augment their thinking and writing. AI is attractive because it can help a student write, in two hours, an essay that ought to take two days. But as Dunsworth’s piece makes clear, the outcome of the essay is far less important than the process of thinking about it, researching it, and writing it. Generative AI, if we’re being realistic about the way it’s used, obliterates that process.

This being the case, to preserve the important parts of a historical education, it is plainly incumbent on history teachers to adapt. But we see that adaptation differently than Humphries does. Rather than abandoning the research paper, to take up one of his examples, we should be thinking about how assignments like research papers can be made to work while AI is readily available to our students. After all, the research paper is the historical assignment par excellence, and for very good reason. Research papers, unlike in-class exams, are an opportunity for free intellectual adventure undertaken beyond the implied surveillance of a testing centre or an invigilated lecture hall. That is, they are geared precisely towards the key processes of a historical education, the thinking and the writing that, done repeatedly over the course of a degree, tend to produce graduates more than ready to live, work, and think in the world beyond the university.

Some of this adaptive work on our part can indeed take the form of syllabus innovation: tweaks and course-policy changes. We might also have a crack at persuasion. Humphries channels a hypothetical student who asks rhetorically: “if AI is so terrible, why is it embedded in all the things I am required to use to complete my degree?” What if, faced with such a question, we actually tried to answer it?

Such an answer might bring us to topics like the function of hype within the history of capitalism (speculative bubbles, anyone? Immigration propaganda? Gold rushes?). It might also compel us to articulate to students why we assign complex research and writing projects, what we want students to get out of them, and how the endeavour might benefit them in their future lives, even far away from the ivory tower. We begrudgingly agree with more moderate colleagues who have suggested that one benefit of the AI boom is that it might force just such a back-to-basics turn among teachers at all levels.[1]

On the subject of hype, we feel compelled to push back against Humphries’s propagation of industry narratives about AI. “Whether AI can actually reason,” Humphries writes, “is not a settled issue among those researchers who specialize in such matters.” It’s a minor comment – an aside, really – within the overall post, but a revealing one, and one that demands rebuttal. Even to present such a thing as a possibility – that a bunch of programmers have created a synthetic, sentient force with the ability to reason – is an extraordinary claim. And as Carl Sagan said, “Extraordinary claims require extraordinary evidence.” What evidence does Humphries provide to support this claim? He links to a paper written by researchers employed by Anthropic, a private AI company. This is akin to citing a paper by Exxon Mobil in a debate about climate change.

It’s easy to be seduced by generative “AI” computer programs. As Emily M. Bender and Alex Hanna point out, it is very hard for humans to be confronted with human-like language and not imagine a real person (or something very much like a real person) behind it. Cognitive scientists have demonstrated that language is a fundamentally social phenomenon. But the human-like language produced by LLMs is just that: human-like. It is not the product of a sentient entity spread mysteriously across data centres, hard drives, and processors. Of course, there is reasoning embedded in generative AI systems. But it is the reasoning of the human beings who have built the software. The programs do not themselves reason.

To Humphries and many others, the hegemony of generative AI seems inevitable. But one of the biggest lessons historians try to impart to our students is the contingency of the past. At the scale of human societies, almost nothing is inevitable. Social and political orders, environments, technology, culture, and so much else are the result of past actions and accidents, conditioned by the circumstances of the day. Things weren’t always the way they are now, and our expectations for the future, however well informed they may be, will not necessarily match future reality.

So as we face the uncertain and up-for-grabs future, we have no problem agreeing to disagree with Humphries and other AI optimists. (Indeed, we are invigorated by some good old-fashioned academic debate.) But we choose to reject the inevitability discourse that suggests we must live in a generative AI world. Instead, we want to insist on the possibility of a human-centred future, even if we find the AI hype machine a formidable obstacle to its pursuit. As teachers, we want to model for students the refusal of this hype – ubiquitous though it is – and show them all that they have to gain by leaning into difficult intellectual work.

Mack Penner is a postdoctoral fellow in the Department of History at the University of Calgary. Edward Dunsworth is an associate professor in the Department of History and Classical Studies at McGill University and a member of Active History’s editorial collective.


[1] See, for example, the comments of Kevin Gannon and Johann Neem on this podcast episode.
