Bored Stiff: A Cranky Historian on ChatGPT


Edward Dunsworth

Remember, not a game new under the sun
Everything you did has already been done
— Lauryn Hill, interpolating the book of Ecclesiastes

I’m not worried about ChatGPT.

Well, let me be more precise. I’m not worried about ChatGPT sparking a surge in undetectable student cheating, or writing better short stories than Alice Munro, or leading the Roombas and Alexas of the world into a great machine uprising that wipes out the human species.

These possibilities, all of which have been extensively gushed or fretted over, depending on one’s standpoint, are frankly preposterous. In order to get past the fairytales, I found it extremely helpful to develop a baseline understanding of what exactly ChatGPT is and how it works. I’m no software engineer, so I was pleasantly surprised to find that it wasn’t all that difficult to grasp the basics. Here’s my layperson’s attempt to describe how ChatGPT works in one sentence: using mountains upon mountains of text, ChatGPT produces answers to prompts by spitting out one word at a time, each individual word chosen simply because the model judged it the most statistically likely word to follow in the sequence.
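For the technically curious, the basic idea can be sketched in a few lines of code. This is my own toy illustration, not OpenAI’s actual method: real models use neural networks trained on billions of words, but the core move is the same one shown here, which is to learn which word tends to follow which, and then generate text by repeatedly picking the likeliest next word.

```python
from collections import Counter, defaultdict

# Toy "next-word predictor": count which word most often follows each
# word in a (tiny, made-up) training text, then generate new text by
# always choosing the statistically most common continuation.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def generate(start, length):
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:
            break  # no known continuation; stop early
        # pick the most frequent next word seen in training
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("mat", 2))
```

Everything the generator emits is recombined from patterns in its training text; it cannot say anything the corpus did not, in some statistical sense, already contain.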

Don’t take my word for it. Take this guy’s:

ChatGPT is derivative. It’s a copycat, a cheat, a confidence man. It’s a biter, not a writer. Everything it does has already been done.

I’m bemused by the casual use of the term “artificial intelligence” to describe ChatGPT – or anything for that matter. Neither word of that phrase is even remotely appropriate. ChatGPT is certainly not “intelligent” in any meaningful sense. It is a computer program that does what it was programmed to do. And here is where the first word crumbles too. What, exactly, is artificial here? ChatGPT is the product of extensive and ongoing human labour – first of all the labour of the millions of actual writers whose work it has been fed, and next that of the vanguard of engineers and such who brought it into being and continue to develop it. Not to mention the extensive environmental resources that the operation of its enormous computing facilities requires. This all seems pretty real to me.

As countless critical commentators have pointed out, we’ve been easily caught up in the techno-hype of ChatGPT. Coming to terms with the reality of what ChatGPT is – and, importantly, what it is not – can help bring us back to ground.

So, no, I’m not worried about the defeat of human creativity. And as for the cheating thing: meh. In university classrooms, where we require a certain standard of evidence and specificity (not to mention accuracy!), I don’t see it as a big problem. As my fellow contributors to this series have pointed out, ChatGPT is not very good at any of these things.

While we’re on the subject, allow me to also say that ChatGPT is not even close to being the best cheating option available to students, especially when it comes to essays. The program cannot hold a candle to essays purchased from ghostwriters, who are, you know, actual human beings with actual intelligence and decision-making abilities. Essayists-for-hire can also, you know, cite a source and include a footnote. Yes, ChatGPT is free and widely accessible. But when tasked with university-level assignments, it produces work in the range of F to C grades. Essay mills, by contrast, can provide essays written by PhDs with subject expertise, whose dishonest origins are virtually undetectable, or at least nearly impossible to prove. This vastly superior method of cheating, of course, costs money, giving access to it a strong class dimension. Rich kids get all the nice things.


To be clear, there are things worth worrying about when it comes to ChatGPT. But those are less about the purported magical, sentient qualities that the program is made out to have, and more about how ChatGPT and other forms of techno smoke-and-mirrors will be – and already are being – used as tools of oppression and exploitation. As software engineer and tech critic Dwayne Monroe put it in an interview on Jacobin Radio: “[Writers and artists] are right to be concerned, not because [ChatGPT] has the ability to actually replace creative writing, but rather that it will be marketed as having that capacity.” There are volumes more to say here, but I think the key point is that it’s not exactly the tool that’s the problem, but rather the powerful interests wielding it and the uses to which it’s put. The Luddites smashed looms and mills not because of some superstitious fear of technology, but because those machines were being used to dispossess and impoverish them.


I was supposed to use this post to respond to ChatGPT’s take on something from my area of historical expertise. Clearly, I’ve been procrastinating.

I did finally get around to asking ChatGPT a couple of questions about Canadian immigration policy and about the history of temporary foreign worker programs. I also asked it to write 300-word reflection papers on a few books by historians.

The responses were unsurprisingly bland, full of cliches of historical writing: everything is “complex” and “evolving” and the result of “multiple factors.”[1] The books were “thought-provoking,” “insightful,” and the products of “meticulous research,” with many of the same terms and turns of phrase repeated verbatim in descriptions of different books. While the answers to the broader historical questions were fairly accurate (or as accurate as could be reasonably expected), ChatGPT’s reflections on the books were utterly incorrect.

It declared Allan Greer’s Property and Dispossession to be arguing the exact premise that the book is trying to refute: “Greer highlights how European notions of property, deeply rooted in individualism and exclusivity, clashed with Indigenous understandings of land as a communal resource. This clash of worldviews lies at the heart of the dispossession process….”

Moving on to Shirley Tillotson’s Give and Take, ChatGPT gallantly saved me from the grave misconception that the book is about taxation. In fact, the chatbot explains, it “traces the evolution of philanthropy from its early roots in religious and moral duty to its modern manifestations in the form of charitable foundations and corporate social responsibility.”

Historians, I hate to break it to you, but – much like our friends and family members – ChatGPT has not read our books.

While better in terms of accuracy, the answers to more general historical questions were just so bland, so mealy-mouthed, so namby-pamby. For example:

“The history of Canada’s Temporary Foreign Worker Program reflects the country’s efforts to manage labor shortages, stimulate economic growth, and adapt to changing economic and demographic trends. However, it has also highlighted the need for careful oversight and regulation to ensure that temporary foreign workers are treated fairly and that the program does not negatively impact Canadian workers and wages.”

Is it possible to vomit and nod off at the same time? It sounds like a student in tutorial who is terrified of saying the wrong thing. Like Succession’s Tom Wambsgans, flipping and flopping, debasing himself to please the evolving whims of his corporate masters.

As I did some more research into how ChatGPT works, the composition of these answers – and especially their blandness – began to make perfect sense. Drawing on so many billions (trillions? quadrillions?) of words and just picking the next one in the sequence, what ChatGPT produces when asked historical questions are, more or less, the answers of the “socially average” historical writer, to repurpose a concept of Karl Marx. So, basically, a ho-hum encyclopaedia entry with a twist of textbook authority and a dash of generic, corporate blog-style prose.

In short, as a writer, ChatGPT is boring. Mind-numbingly, face-stretchingly, hair-pullingly, tear-jerkingly boring.


For the last couple semesters, I’ve added a new wrinkle to my first-day-of-class plagiarism speech. I keep these comments brief because I am confident that the super-duper-majority of my students don’t cheat, won’t cheat, and have no interest in cheating. I mention the usual stuff: yes, I do catch people; yes, there are consequences.

But my new wrinkle, and where I put my emphasis now, is to just say that cheating is boring. Why not try something, even with the risk of failure (in fact, an extremely low risk in a humanities course)? Create something new. Put in some work. Be interesting people.

The extreme boring-ness of ChatGPT puts it in good company with its fellow cheats.

So, no, I’m not worried about ChatGPT. The confidence game only works if we trust the con artist. I suggest that we don’t.

Edward Dunsworth is a member of the Editorial Collective.

[1] Lest you think that I am casting aspersions, let me be clear that I too am guilty of employing these cliches, and not infrequently.

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License. Blog posts published before October 28, 2018 are licensed with a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Canada License.
