I taught high school students for a decade and a half before my current university career. I obtained my B.Ed. in the early 1990s, at the height of K-12 educators’ interest in constructivism and alternative assessment. The phrase “alternative assessment” was eventually replaced by “authentic assessment” and finally the term became simply “assessment” (at least at the K-12 level). The change in terminology reflected a change in understanding: alternatives to traditional paper-and-pencil testing should not be considered “alternatives” but as central methods of assessing students. Those methods should be “authentic” in that they reflect actual real-world (i.e., outside of school) tasks, and should require the demonstration or performance of skills. As these ideas increasingly became the norm among secondary school teachers, the adjectives “alternative” and “authentic” fell away.
And so when I taught high school chemistry, I replaced the final paper-and-pencil examination that required calculations and recall of memorized facts with a final multi-day unstructured lab activity. In my grade eleven courses, students were given a list of 20-30 chemicals, and then provided an unlabeled sample of one of them. They were required to research the physical and chemical properties of the list of chemicals, perform appropriate tests of their own choosing on their unknown sample, and thereby determine its identity. In so doing, they demonstrated their ability to research, experiment, and draw conclusions. My grade twelve students were given a hydrated salt whose identity they had to determine by evaporating away its water content. They, too, were required to design their own lab process.
Yet when I began teaching university history students, I reverted to tests and final exams. When I found myself in April grading not only an end-of-term research essay but also three essays from the exam each student had written, I realized something had to change. I did not need four essays at the end of the year to determine whether students had acquired the skills the course was designed to teach them. Nor was there much value in my writing comments and offering suggestions for improvement on exams that would not be returned, or on final essays that most students would choose not to pick up.
So I have stopped giving exams in my university History courses.
I’m not alone in thinking this way. Sociologist David Jaffee, in a Chronicle of Higher Education article provocatively titled “Stop Telling Students to Study for Exams,” makes the following criticisms of university exams:
While faculty consistently complain about instrumentalism, our behavior and the entire system encourages and facilitates it…. This dysfunctional system reaches its zenith with the cumulative ‘final’ exam. We even go so far as to commemorate this sacred academic ritual by setting aside a specially designated ‘exam week’ at the end of each term. This collective exercise in sadism encourages students to cram everything that they think they need to ‘know’ (temporarily for the exam) into their brains, deprive themselves of sleep and leisure activities, complete (or more likely finally start) term papers, and memorize mounds of information. While this traditional exercise might prepare students for the inevitable bouts of unpleasantness they will face as working adults, its value as a learning process is dubious.
Other scholars have critiqued exams as well. Alfie Kohn suggests that exams largely encourage memorization and thereby promote cheating. A meta-analysis of 250 studies of assessment and learning by Black and Wiliam concluded that “intentional use of assessment in the classroom promotes learning” and that effective assessment requires “determining students’ pre-existing beliefs and knowledge, teaching to challenge and extend students’ beliefs and knowledge, and encouraging student metacognition.” John Hattie’s synthesis of more than 800 meta-analyses of evaluation in education reveals that testing is “only effective if there is feedback from the tests to teachers such that they modify their instruction to attend to the strengths and gaps in student performance.”
I want to highlight two Manitoba Education and Training documents that are particularly useful for post-secondary educators interested in re-examining their methods of assessment. These are documents that I used in my previous career as a Manitoba high school history and chemistry teacher, and that I recommend regularly at the University of Winnipeg’s annual orientation for new Faculty of Arts hires. The first, Rethinking Classroom Assessment with Purpose in Mind, was released in 2006. The second, Success for All Learners: A Handbook on Differentiating Instruction, is a Manitoba Education support document released in 1996.
Rethinking Classroom Assessment with Purpose in Mind explains that there are three kinds of assessment: for, as, and of learning:
- Assessment for learning: instructors gain insight to plan further instruction; students receive helpful feedback.
- Assessment as learning: students develop meta-cognition and personal responsibility.
- Assessment of learning: achievement is measured at a point in time.
The last of these, assessment of learning, has typically received the greatest emphasis (through, for example, tests and examinations), but should receive the least: “The ultimate goal of assessment is to help develop independent, life-long learners who regularly monitor and assess their own progress.”
How does a university professor achieve this goal?
We need to emphasize formative assessment (assessment for and as learning) over high-stakes summative assessment (assessment of learning). Doing so encourages student self-reflection, which is critical to a constructivist approach to teaching. We also need to avoid a mismatch between our curriculum content and our assessment practices. Good assessment, then, incorporates scaffolding and is differentiated.
In a traditional approach to education, educational psychologist Jon Mueller observes, course planning begins with content delivery; assessment is relegated to the end-stage of planning, and focuses on content acquisition. Proper assessment, he argues, involves backwards design: “teachers first determine the tasks that students will perform to demonstrate their mastery, and then a curriculum is developed that will enable students to perform those tasks well, which would include the acquisition of essential knowledge and skills.” (Manitoba Education provides a useful template for this backwards design process.)
Methods for formative assessment (assessment for and as learning) are provided in Success for All Learners: A Handbook on Differentiating Instruction. This collection of activities, graphic organizers, and templates – many of which are easily modified for the post-secondary classroom – is available from the Manitoba Text Book Bureau. Since its publication, similar sources have been made available for free online: see, for example, Portage la Prairie School Division’s Differentiated Instruction Strategies for Teaching & Learning, as well as the following:
- Centre for the Study of Historical Consciousness’s Big Six Historical Thinking Concept Templates
- Active Listening
- KWL (know, want to know, learned)
- LINK (list, inquire, note, know)
- Concept Map
- Fact- or Issue-Based Article Analysis
- Gallery Walk
- Graffiti Boards
- SQ3R (survey, question, read, recite, review)
- Academic Controversy
Expanding lines of communication between university Faculties of Education, provincial departments of education, K-12 teachers, and university professors serves to improve post-secondary education. It also saves post-secondary instructors from unnecessarily reinventing the pedagogical wheel! Moving away from an emphasis on summative assessment to formative assessment in the university classroom can be daunting – but less so when we are aware of the many K-12 resources available to help with that transition.
Janis Thiessen is an Associate Professor of History at the University of Winnipeg and a past recipient of the Faculty of Arts Excellence in Undergraduate Teaching Award. She is the author of NOT Talking Union (MQUP 2016) and Snacks (UManitoba Press 2017).
Cited in Manitoba Education, Citizenship and Youth, “Rethinking Classroom Assessment with Purpose in Mind” (2006), 5.
John Hattie, Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement (Oxford: Routledge, 2009), 178. Hattie’s synthesis determines that the top 10 influences on student achievement are: self-reporting of grades; Piagetian programs (teaching geared to students’ level of cognitive development); formative evaluation (from teacher to student and vice versa); micro-teaching; acceleration; classroom behaviour; comprehensive interventions for learning disabled students; teacher clarity; reciprocal teaching (students use cognitive strategies such as summarizing, questioning, clarifying, and predicting); and feedback.
Western and Northern Canadian Protocol for Collaboration in Education, Rethinking Classroom Assessment with Purpose in Mind (Governments of Alberta, British Columbia, Manitoba, Northwest Territories, Nunavut, Saskatchewan, and Yukon Territory as represented by their Ministers of Education, 2006), viii.
See also Grant Wiggins and Jay McTighe, Understanding by Design (Association for Supervision and Curriculum Development, 2005).
This post is part of the ongoing Beyond the Lecture: Innovations in Teaching Canadian History series edited by Andrea Eidinger and Krista McCracken. Inquiries, proposals, and submissions can be sent to the editors via unwrittenhistories [at]gmail[dot]com.