
If there's a domain where college faculty could most stand to learn from K-12 educators, it is assessment. 

Too often, assessment is treated in postsecondary education as synonymous with grading: as a matter of ranking students against a rubric or their peers on the basis of a quiz, a problem set, a multiple-choice or essay exam, or a report or research paper.

But as any K-12 educator could explain, assessment can serve many other, even more important, purposes, each of which demands very different kinds of assessment exercises.

What are the functions that assessment can serve? Here are nine:

  • To establish a reference point
  • To diagnose a student’s needs and strengths
  • To individualize instructional pathways
  • To motivate student learning
  • To prompt metacognition (to encourage students to monitor and reflect on their learning)
  • To measure progress
  • To provide feedback and guidance
  • To establish accountability
  • To evaluate a student’s performance against a rubric or against other students

Typically, assessments are classified through rather crude dichotomies. There are formative assessments and summative assessments -- assessments during the instructional process, designed to improve teaching, and those at the end of a unit, to evaluate student learning.

Assessments can be traditional or authentic – that is, those that are academic and those that involve the skills and knowledge needed to perform a “real world” task.

Assessments can also be informal or formal -- that is, the difference between observations made during normal classroom interactions and the responses an instructor receives from tests of knowledge and skills.

In addition, there is the distinction between norm-referenced and criterion-referenced assessments – between those that compare a student’s performance against other students in a single class and those that measure performance against a specific standard.

If an assessment is to be meaningful and fair, it must be “valid” and “reliable” – that is, the results must be accurate and consistent. A proper assessment must measure the knowledge or skills it is supposed to evaluate, without the results being distorted by extraneous factors, and the results must be roughly equivalent regardless of when the assessment is administered.

Many assessments are anything but valid and reliable. Recent research has demonstrated that in introductory college-level science classes, female students, on average, tend to underperform on high-stakes multiple-choice exams and perform better on other forms of assessment – a difference that is generally attributed to “stereotype threat,” the test-induced anxiety that their performance might confirm negative stereotypes.

As higher education moves toward a more outcome-driven approach that emphasizes mastery rather than seat time, assessment becomes more and more intrinsic to teaching and learning. The challenge is to embed assessments at every stage of the learning process in ways that empower students and their instructors. Embedded assessments give students the opportunity to reflect critically on their learning and provide instructors with the information needed to gauge students’ progress toward learning objectives and to adjust teaching methods to remedy weaknesses or confusion.

Typically, assessments correspond to a particular dimension of Bloom’s Taxonomy, the classification of thinking skills developed by educational psychologist Benjamin Bloom in 1956, which distinguishes between recall, comprehension, application, analysis, synthesis, and evaluation.

Fill-in-the-blank, matching, labeling, or multiple-choice questions typically require students to recall or identify terms, facts, or concepts.

Short- or long-answer questions or problem sets generally test comprehension or application skills. These assessments typically require students to solve problems; summarize, classify, compare, or contrast; or interpret information, draw inferences, and present an explanation.

In contrast, classroom discussions, debates, and case studies tend to emphasize analysis, synthesis, and evaluation. Students are typically asked to offer a critique or assessment, identify bias, present a judgment, or advance a novel interpretation.

Performance-based assessment offers a valuable alternative (or supplement) to the standard forms of student evaluation. Performance-based assessment requires students to solve a real-world problem or to create, perform, or produce something with real-world application. It allows an instructor to assess how well students are able to use essential skills and knowledge, think critically and analytically, or develop a project. It also offers a measure of the depth and breadth of a student’s proficiencies.

Performance-based assessment can, in certain instances, simply be an example of what Bloom’s Taxonomy calls application. Thus, a student or a team might be asked to apply knowledge and skills to a particular task or problem. 

But performance-based assessment can move beyond Bloom’s Taxonomy when students are engaged in a project that requires them to display creativity and that results in an outcome, project, or performance that is genuinely new. The more sophisticated performance assessments involve research, planning, design, development, implementation, presentation, and, in the case of team-based projects, collaboration.  

If performance-based assessments are to be fair, valid, and reliable, it is essential that there is an explicit rubric that lays out the criteria for evaluation in advance. It is also helpful to ask students to keep a log or journal to document the project’s development and record their reflections on the developmental process.

The most commonly used assessments – the midterm and final or the term paper – have an unpleasant consequence. Reliance on a small number of high-stakes assessments encourages too many students to coast through the semester and to pull all-nighters when their grade is on the line. This may inadvertently encourage a party culture.

In stark contrast, performance-based assessment offers a way to ensure that evaluation is truly a learning experience, one that engages students and that measures the full range of their knowledge and proficiencies.

Steven Mintz is Executive Director of the University of Texas System's Institute for Transformational Learning and Professor of History at the University of Texas at Austin. Harvard University Press will publish his latest book, The Prime of Life: A History of Modern Adulthood, next month.
