Jim Clifford In the lead-up to my take-home exam last April, I was trying to think of questions ChatGPT could not answer. I hoped that by focusing on details from my lectures that are not available on Wikipedia or similar online sources, the large language model would fail to provide a strong answer. I was dead wrong:
Edward Dunsworth

Remember, not a game new under the sun / Everything you did has already been done
— Lauryn Hill, interpolating the book of Ecclesiastes

I’m not worried about ChatGPT. Well, let me be more precise. I’m not worried about ChatGPT sparking a surge in undetectable student cheating, or writing better short stories than Alice Munro, or leading the Roombas… Read more »
by Carly Ciufo “Do museum workers do human rights work?” I ask ChatGPT. The artificial intelligence’s (AI’s) answer is longer and, honestly, more robust than I expect:
In this series, Active History editors are asking ChatGPT about their own areas of expertise and commenting on the process and answers. Sara Wilmshurst Unlike most of Active History’s editorial team, I’m currently neither a student nor an educator. I haven’t had to resist the temptation of assigning my work to artificial intelligence or had to bust students for succumbing… Read more »
You have probably heard about OpenAI’s ChatGPT, Microsoft’s Bing Chat, or Google’s Bard. They are all based on Large Language Model (LLM) architectures that produce human-like text from user prompts. LLMs are not new, but they seem to have recently crossed a virtual threshold. Suddenly, artificial intelligence—or AI for short—is everywhere. While it is true that they sometimes “hallucinate,” producing factual errors and quirky responses, the accuracy and reliability of LLMs are improving rapidly. There is no escaping it: generative AI like ChatGPT is the future of information processing and analysis, and it will change the teaching and practice of history. Although some of its effects can be felt already, its long-term implications are not as clear.