Assessing Critical Reading Assessments at Huron University College


Students in the library of a British Columbia high school, 1930-1960. Library and Archives Canada. MIKAN 4369768

Geoff Read, Tom Peace, and Tim Compeau

As the most recent professors in Huron University College’s signature first-year course, History 1801E, “Controversies in Global History,” we have struggled for several years with an issue that appears to plague university instructors far and wide: many of our students are not doing the readings for their weekly tutorials. This poses quite a problem since the premise of the tutorials is that through discussion of the readings, students will learn how to identify and assess arguments, particularly through the critical evaluation of the historical evidence upon which they are based. Students who do not do the readings for the tutorials, therefore, not only cannot participate in, or contribute to, the discussion, but actually cannot even follow the course of the conversation. They essentially learn nothing in the process.

So what to do? We increased the participation grade to 15% of the final mark to emphasize that we valued this component of the course. This had no apparent effect. We incorporated student-led discussions, hoping that class members would feel obliged to help each other out by doing the readings, thereby enabling them to answer each other’s questions. Again, this had at most a negligible impact on students’ reading and participation. For a few years we instituted content-based quizzes at the start of each tutorial. This made some difference, but it was labour-intensive for the professors and encouraged the kind of rote learning that was at odds with our desire to have students think of History as more than just the memorization of facts.

Then in 2016-17, following the Historians Teaching History Conference at Mount Royal University, we tried a new approach: requiring students to fill out a critical reading assessment form (available below) for every tutorial in which a reading was discussed. This assessment would then count for half the participation grade in each applicable week. We hoped to convey several messages with this mechanism.

First, we wanted our expectations to be clear – we require students to come to class prepared, having done and reflected upon the assigned reading in a rigorous way.

Second, we hoped that by encouraging students to prepare properly, we would not only ensure that a critical mass of them would do the reading, but that they would be ready to discuss it at a relatively sophisticated level.

Third, we designed the forms to reinforce our in-class teaching. The form asks students to identify the thesis, the sources on which the argument is based, the author(s)’ position in the historiography, connections to other class materials, and three strengths and three weaknesses of the argument. Further, the form requires students to explain why they did, or did not, find the argument convincing.

Fourth, we hypothesized that part of the culture of not preparing properly for classes was a general sense students had of disengagement from the course. Accordingly, we hoped that the continual evaluation and feedback provided on the assessments would be one means of keeping students engaged in the class material.

A fifth benefit of the assessment forms was not part of our initial motivation but is worth mentioning: completed and graded assessments provide excellent study materials for students as they prepare for the course’s tests and final exam.

So have the critical reading assessments been effective? Have they met our four main objectives?

We should be clear that we have applied no rigorous test to measure this; we did not, for example, run a study of student learning before and after implementing the assessments. That said, the anecdotal evidence from our classrooms suggests the measure has been at least a partial success. Certainly, the assessments help make our expectations for tutorial participation as clear as possible. If students overlook the blurb in the syllabus that outlines those expectations, and also miss the instruction to this effect in both the introductory lecture and the introductory tutorials of the year, then surely these assessments send a message about what they need to do to prepare for class. Moreover, the results have been encouraging: more students do the reading, class discussions are more substantial, and student engagement in the class does seem better. Tests and exams, additionally, seem to confirm that students are having greater success at mastering the material and at developing their critical thinking and reading skills.

So pleased are we with how the experiment has gone that we have begun to implement these assessments in other classes. In the summer of 2017, for example, Tim adapted the critical reading assessments for his second-year online American survey course, History 2301E at Western University. In previous years, his attempt at replicating the tutorial experience online using the forum feature of the university’s course platform proved disappointing. With summer jobs and other distractions, students routinely skipped the discussion component where they were challenged to post and answer questions about the articles much as the students do in class. The introduction of the critical assessment sheet, weighted as a separate weekly assignment (10 sheets at 2% each), was accompanied by a significant improvement in the forum discussions over the twelve weeks of the course.

The impact of this weekly drill, carried out within such a short time frame, was also evident in student essays, especially for non-history students taking the course as an elective or for an essay-course requirement. Students with little or no experience of the demands of history essays received a crash course through these sheets and seemed to gain a clearer idea of how to interrogate and write about the books and articles they encountered in their own research. As with the in-class assessments, the online version caused a significant increase in the professor’s workload, and with only a single year in place it is too soon to make any concrete claims as to their effectiveness. Nonetheless, the early evidence is promising.

However, this modest success story comes with a proviso. The positive effects of the assessments in tutorials are most obvious in the first halves of our courses, when we estimate that somewhere between 80 and 90% of students complete them and come to class better prepared as a result. This is a marked improvement on earlier years and has positive effects in all four areas outlined above. But in 1801E, a full-year course, two discouraging trends emerge in the second half.

The first is an exodus of weaker or less engaged students from the class. One possible explanation is that the burden of completing the assessments helps put them to flight. Another, more troubling, possibility is that once students fall behind on their assessments, a feeling of hopelessness sets in: they feel they cannot possibly catch up in the class and so give up.

The second negative trend is a dramatic drop-off, by February and March when essay-writing season hits, in the number of students doing the assessments (and therefore, presumably, the readings), as well as in the quality of class discussions. This is entirely consonant with patterns that existed before we implemented the critical reading assessments, and it suggests that the positive effects of the assessments are real but limited in both time and scope.

A third downside to the critical reading assessments is, of course, that like the content quizzes we experimented with previously, they create quite a bit of work for the instructors. Instead of heading back to our offices and quickly recording the day’s participation grades for each member of the tutorials, we must now spend roughly 3-5 minutes per assessment going over it, ensuring it is substantive, providing some constructive feedback, and recording the grade.

The critical question for us as instructors, then, is whether the assessments are worth the effort and cost once we weigh the positives against the negatives and factor in the extra work they create. We are united in believing they are. As with so many assignments and pedagogical strategies, the payoffs of the critical reading assessments are admittedly greatest for those students who are fully engaged with the class. The best students, in short, remain the best students and take maximum advantage of the instruction we provide, including these assessments. But the improvements we see in class discussion and in student engagement and performance, combined with the fact that most students do the assessments most of the time, albeit with a drop-off towards the end of the year, suggest to us that this is a strategy and exercise worth continuing.

So for next year we will be keeping the assessments and also incorporating some new strategies to try to encourage student learning in the lectures. Perhaps in March 2019 we can update activehistory.ca readers on how it all turns out.

About the authors

Geoff Read is Associate Professor of History at Huron University College. He publishes primarily on gender in modern France and the French Empire.

Tim Compeau is an Assistant Professor of History at Huron University College. He researches public history and colonial North America.

Tom Peace is an Assistant Professor of History at Huron University College and editor at ActiveHistory.ca. His research focuses on settler colonialism, literacies, and education in eighteenth- and nineteenth-century North America.


History 1801E: Critical Reading Assessment

Author ______________________________________________________________

Title________________________________________________________________

Title of Publication (e.g. William and Mary Quarterly) ____________________________________________________________________

Date Published _______________________________________________________

  1. What is the central argument of this reading?
  2. What evidence is provided to support the argument?
  3. What other historians are discussed? How are their arguments or positions on the topic different?
  4. How does this reading connect with the textbook and lectures? Does it complicate or challenge the other narratives we have examined?
  5. List here the article’s strengths and weaknesses:

     Weaknesses          Strengths
     1.                  1.
     2.                  2.
     3.                  3.

  6. Are you convinced by the argument? Why or why not?


This post is part of the ongoing Beyond the Lecture: Innovations in Teaching Canadian History series edited by Andrea Eidinger and Krista McCracken. Inquiries, proposals, and submissions can be sent to the editors via unwrittenhistories [at]gmail[dot]com.

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License. Blog posts published before October 28, 2018 are licensed with a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Canada License.

11 thoughts on “Assessing Critical Reading Assessments at Huron University College”

  1. Adam Chapnick

    There’s a lot of terrific advice here. I’d note, however, that 15% for participation probably still isn’t enough to convince many students (or even to indicate to them) that preparing for seminars / tutorials should be a priority. [If I have an essay coming up worth, say, 25% of my final grade, and a single seminar / tutorial worth 1.5%, the instructor has sent me a very clear message about which one should be the priority.]

    Similarly, when class participation is only worth 15% of a student’s grade, it’s much more difficult for instructors to justify taking the time to provide such regular feedback as is being provided here. Boosting the percentage of the final grade allocated to class participation (perhaps even significantly) could potentially therefore address both challenges at once.

    As for the annual and regular/predictable drop-off come essay time, doesn’t that simply send a message that the demands the course is placing on the student body, in terms of quantity of load, are unrealistic? Cutting content that few ever absorb doesn’t actually result in much of a net loss in terms of student knowledge / understanding. Why not use second-term tutorials in a full-year course to workshop those essays that are worth so much of the grade, or to break down course themes as one would on a final exam? Ironically, this approach might even increase the total amount that students read, as the workshops might inspire them to pursue additional sources that relate to a topic that genuinely interests them.

    Many historians have been taught, however inadvertently, that content and coverage is critical to good teaching. The research is pretty clear that this professor-centric approach is not helpful if the goal is student learning and understanding. Privileging depth over breadth generally results in greater long-term recall and stronger feelings of ownership of the material.

  2. Daniel Ross

    Great post, a lot of good ideas here! This term I’ve used the 3-2-1 format for my 2nd-year students’ assessments: identify the 3 most important aspects of the reading; identify 2 weaknesses or areas that were difficult to understand; pose 1 question that will stimulate discussion in class. I’ve made it mandatory to do five of these during the term, with the students choosing when. They submit them electronically through Moodle the morning of the class, and I often incorporate their questions from the “1” part into the discussion. In total, participation and the assessments are worth 35% of their mark. This matches the fact that we spend one hour discussing for every two hours of lecture.

  3. David Calverley

    An interesting article and I think the authors have put some good thought into how they can improve their discussions. I think most professors and teachers have struggled with class discussions that are lackluster. I agree with what the other two commentators said. They are great observations and suggestions. I’ve been using Harkness Discussions (modified somewhat) for the last ten years, and more standard seminars for about 15 years before that.

    There are some other issues to think about. First, most high school students leave grade 12 having never read an academic article (the sort of article that is standard in most readers for first-year classes). If they haven’t been prepared to read a 15-25 page academic article, they are likely overwhelmed by the structure and nature of academic writing. I start my students in Grade 11 reading longer, more involved articles and book chapters (not just textbook chapters, which are generally written at a low reading level). I teach them how to identify a thesis, how to find main points of evidence, why checking footnotes is important, etc. It takes time. However, reading academic writing is a skill that has to be taught.

    It’s always worthwhile to think about the readings you are using. Is the reading something the students will find interesting? This might elicit some groans, but what the prof finds engaging might leave the students cold. Is the content of the article the important thing in your seminars, or is the critical engagement and thinking the important thing? If it’s the latter, find readings more students will find engaging and interesting but which still push them to read and think critically. In my experience, two readings that take opposing viewpoints on the same issue work best. It helps students understand how arguments are formulated and how evidence can be interpreted differently.

    Do you prep students before assigning a reading? Do they have some background in the material? Have you told them a bit about the article and/or how the article fits into larger debates/discussions about the topic?

    It’s also important to consider that some students are simply shy. I have students this year who are brilliant but rarely speak in discussions. To compensate for this, they are allowed to send me follow-up emails within a few hours of a discussion ending. Their emails must address the article, two points made in the discussion, and their opinions about the reading. I think it’s important to recognize that students are all different, and not every student is comfortable speaking out. Daniel’s comment about pre-submission of questions and observations is similar – it lets students get involved through a different route.

    You might want to look into the Harkness Discussion method that was developed years ago at Phillips Exeter. I find it very useful and it is easy to modify to suit your own teaching or classes. It helps that all the English teachers at my school use it as do the History teachers. Students know what it is and what is expected of them. Students also have to annotate their readings (another skill that needs to be taught) and submit them as part of their grade. I take notes on each student as they talk. I email each student their particular comments at the end of class so they can see how they did. If they didn’t talk or send a follow-up email they get a zero. If their annotations were weak they get a weak grade.

    I’m more than happy to communicate with the authors further if they want to contact me. I have no problem with the ActiveHistory editors forwarding my email on to you as regards this posting.

  4. Mary-Ann Shantz

    I enjoyed reading about the various strategies you’ve tried and your sense of what’s worked well and what hasn’t worked as well. I’ve embraced a similar approach and reached similar conclusions in my teaching at the 100 and 200-level. Teaching at a different sort of post-secondary institution, MacEwan University in Edmonton, I have slightly different constraints: my courses are capped at 40 students but there are no tutorials scheduled (nor TAs).

    I have adapted a modified tutorial format from my colleagues and settled on the following approach, which students often tell me provides the most engaging and significant learning experiences in the course. Four times over the course of the semester I divide the class into two groups and assign a secondary source article for them to read. They are required to prepare a summary of the article in advance, to be submitted at the start of the class in which the article will be discussed. There is no participation mark connected with the assignment, but they must be in attendance at the discussion in order for their summary to be marked. The emphasis is on summarizing (i.e., understanding what the historian is arguing and doing in the article) rather than critiquing it. I emphasize reading carefully for understanding and synthesizing key arguments as the main focus of the assignment. I provide the following questions to guide students in preparing their summaries:

    • What is the thesis or main argument presented in the article? How does the historian develop his/her argument throughout the article?

    • What evidence does it rely on? In other words, what types of primary sources does the historian use? Whose perspectives or experiences are conveyed in the primary source material?

    • Why did the historian write this article? Does the historian seek to correct historical misconceptions or oversights?

    • What does this article contribute to your understanding of this topic/time period?

    There is a lot of overlap in the types of questions you and Daniel and I are asking. I really emphasize to students that they need to read the article in conjunction with the footnotes and think about whose perspectives/voices are represented in the primary source material. In my classes these summaries are weighted at 5% each.

    I’ve experimented a fair bit with the discussion format, and what I find to be working really well currently is small group discussion. I break the class into groups of 4-5 students (mixing the groups up over the semester to encourage students to get to know each other) and have them discuss a series of questions I put up on PowerPoint. I usually put up one or two questions and give them 5-7 minutes to discuss before moving on to the next set of questions. I emphasize at the start of class that I want their conversation to develop organically, and that I will periodically be changing the questions but will not interrupt their conversations. I also convey that I am not marking them on participation, but that I will be floating around the classroom, sitting in with each group at some point, without directing the conversations.

    I have found that having students write the summary ahead of time, motivating them to do it by attaching marks, but then taking the pressure off in terms of participation marks has translated into really engaged and enjoyable small group discussions. In this format, I find that probably 90% of students actively participate, and they are almost always on topic. I agree with the above comment that choosing engaging readings is really important, and it is one of my favourite parts of putting my courses together.

  5. Tom Peace

    Thanks for your constructive feedback everyone. I don’t have a ton of time tonight, but I want to chime in a bit in response to some of the comments. They are all very helpful. Thank you.

    Adam’s comments, though, about the numerical value ascribed to these assessments present a particularly thorny issue. I agree with him about their overall weighting, but not in terms of how the course is structured. Here’s the breakdown in our course (the reading assessments are part of the participation grade):

    The Value of History 5% (Sept. 28)
    Primary Source Analysis 15% (Nov. 9)
    In-class Essays 15% (5% each)
    Research Essay Proposal 5% (Jan. 30)
    Research Essay 20% (Mar. 8)
    Tutorial Participation 15%
    Tutorial Presentation 5%
    Final Exam 20%

    We’ve opted for a lot of smaller forms of assessment (three in-class essays and three essays prepared in advance) in order to provide first-year students with the opportunity to learn from their mistakes. The critical reading assessments form the building blocks from which students respond to these (slightly) larger assignments. By the time our students write their major paper (due a month before the course ends) or sit the final exam, they have essentially had two rounds of substantive feedback on these assignments, in addition to our weekly feedback on the critical reading assessments. The purpose here, as David indicates, is to bridge our students’ diverse experiences of high school and to adequately prepare them for the university context in which they will be spending the next four years.

    I write this because I think Adam is right about our need to think about weighting and timing. I’m curious to hear what people think about frequent low-stakes assignments and how they are placed within the course; or, reciprocally, whether folks find larger assignments (with substantive class time dedicated to their development) more valuable. I might add that, in addition to timing regular work like these critical reading assessments around the higher-value assignments in the course, we are also faced with the broader culture of the institution, created by colleagues teaching in different units with different expectations. This also plays a role in shaping student engagement.

    One last word about annotation in response to David. I haven’t integrated this well into my teaching, but am really interested to hear about how others have done so. This year I’ve started to play around with hypothes.is, a relatively intuitive and open source annotation tool. I’m really curious to hear about how colleagues have developed assessment tools around this.

  6. Joshua Randall

    When I was doing my degree in History as a mature student I absolutely hated these sorts of activities. In one way it felt demeaning, and in another it distracted from the actual act of reading. I understand the intent and goals behind implementing a critical reading form. However, the problem is that I feel this approach often has the opposite effect to the one intended. It feels like death from a thousand paper cuts from the book of redundancy.

  7. Andrea Eidinger

    I just wanted to chime in to say two quick things! As someone who’s now taught at 5 different institutions, I can tell you that some have restrictions on the portion of the final grade that can be allocated to participation. At upper levels it tends to be capped at 20%, and at lower levels at 15%. Depending on the flexibility of your chair, this may or may not include relevant written assignments. Added to that, only two of the institutions I’ve worked for have specifically allocated tutorial time. So generally speaking, what I do is divide what is (usually) a three-hour lecture into two parts. In the first, I do a regular lecture. Then a break. Then I do discussion groups around the readings. I provide discussion questions ahead of time. The class (usually 35 to 50 students) breaks into groups of 3 to 5 people. They deal with the questions, and I circulate, spending about 5 to 10 minutes with each group. Each discussion session is worth 5 points for each student. Students get 2 points for showing up, and additional points for contributions. Then, one of the main sections of the exam is specifically dedicated to the readings. I’ve tried lots of different strategies, and this seems to work the best.

    I will say that one consideration for me has been the issue of time. As a sessional instructor, I am not paid for any course or class prep, nor am I paid for marking. Sessional contracts usually only cover actual face-time (but not office hours). So I try to limit the amount of work that I do outside of class. So while I love this reading assignment, realistically, it’s not feasible for me.

  8. Bruce Douville

    Thank you for sharing your insights on this method of encouraging student engagement with the readings. I’m wondering if you have encountered students taking the easy way out, and simply copying their answers from another student? It would seem to me that some vigilance would be required to identify and discourage this practice. Your thoughts?

  9. Liz Tobin

    A somewhat “tongue in cheek” suggestion for engaging students in critical reading…

    As a non-academic I find critical reading often exhausting because, to establish the credibility of the author’s stance, I must check the sources. I’ve often had to focus simply on one author because his/her sources have proved the most reliable.

    I have the advantage that my preferred venue can be my dining room table, and a short “dip” into a book while enjoying some home cooking! I’m currently making my way through Bill Bryson’s “At Home: A Short History of Private Life”. He provides a PDF of 115 pages of source notes, and my paperback version has a handy alphabetical list of the authors as well.

    Food for thought perhaps?

  10. Geoff Read

    Sorry – I just have time for a very quick response. Thanks to all – great food for thought. To answer Bruce’s question: yes, we have encountered that problem. My response was to give the students who’d copied each other half the marks and to warn them sternly not to do it again. That worked. But yes, it does require some vigilance. And obviously, the bigger the class gets, the more difficult that vigilance is to maintain.

  11. Lindsay Gibson

    Thank you for the thoughtful article about confronting an issue that many post-secondary instructors regularly encounter: how to nurture students’ ability to critically read academic articles prior to class, and how to assess students’ ability to critically read academic articles in history.

    One of my issues with the History 1801E Critical Reading Assessment is that it is somewhat generic. In other words, the questions asked on this sheet are not unique to the discipline of history and could be used in other arts/humanities courses to analyze their readings.

    If we are serious about the claim that the discipline of history teaches specific historical thinking competencies and dispositions, then our teaching methods, in class activities, and assessments should focus on the various historical thinking concepts and dispositions needed to think historically. How can we assess by the end of a course that our students have developed the dispositions and historical thinking understandings we are setting out to teach?

    Teaching students to understand, interpret, and analyze secondary accounts (or what Chapman refers to as historical interpretations) is not as simple as it seems. As Calverley points out in one of the comments, many first- and second-year undergraduate history students have never encountered a historical interpretation as written in an academic journal, and it would be unfair to expect that working with such texts will come easily to them.

    Arthur Chapman, a history educator at University College London’s Institute of Education, has written numerous articles and conducted systematic research on many aspects of historical interpretations (secondary accounts), including students’ understandings. I strongly suggest that becoming acquainted with his work would be of great value to anyone teaching undergraduate and graduate history students.

    His 83-page mini-book for Edexcel A Level in History is particularly helpful (A Levels in the UK are rigorous grade 12 courses that prepare students for the demands of university education). The mini-book focuses on explaining what historical interpretations are, what students should learn about historical interpretations, what challenges students encounter when learning about historical interpretations (as indicated by research), and what activities and strategies will help students develop sophisticated understanding of historical interpretations. Activities focus on helping students understand what a historical interpretation is, how to comprehend and analyze historical interpretations, how to explain why historians arrive at differing interpretations, and how to evaluate the quality (and plausibility) of historical interpretations. https://www.researchgate.net/profile/Arthur_Chapman3/publication/312300440_Developing_Students%27_Understanding_of_Historical_Interpretation/links/5878f07108ae9a860fe2a835/Developing-Students-Understanding-of-Historical-Interpretation.pdf

    https://www.academia.edu/23167291/Chapman_A._2016_Historical_Interpretations

    I feel like Chapman’s work might help your thinking about what it means to be able to read and critically analyze historians’ interpretations in a way that is specific and particular to the discipline of history.

    One other thing I’d like to add is that it might be helpful to link your weekly tutorial readings to the final exam or other major course assessments. For example, use the tutorials to help students understand how to read, understand, analyze, and critique historians’ interpretations using justifiable criteria, and provide specific and descriptive formative feedback to students throughout the course so that they are improving in this aspect. On major course assignments or on exams you might want to design activities/questions/tasks that require students to mobilize this understanding and read, understand, analyze, and critique historians’ interpretations. You could get them to read an entire article for an exam or provide excerpts from historians’ accounts and get them to analyze/critique them or explain why historians arrived at such different interpretations despite utilizing similar evidence.
