
I know what you’re thinking: Why is a poet writing about assessment in higher education? Honestly, I wonder that myself. One day, when assessment came up in conversation, I commented that it could be useful to programs as they make curricular decisions. Within 48 hours, the dean placed me on the institution’s assessment committee. Suddenly, assessment is a hot topic and, of all people, I have some expertise.

My years on that committee convinced me that we must pay attention to the rise of assessment because it is required for accreditation, because demands have increased significantly, and because it might be useful in our professional lives. Accrediting bodies are rightly trying to stave off the kind of No Child Left Behind accountability that the Spellings Commission proposed. Maybe the incoming secretary of education will consider how we might be better -- not more -- accountable. Perhaps, too, Wall Street should be held accountable before the Ivory Tower. But assessment in higher education will likely become more pressing in a weak economy.

One tool to which many institutions have turned is the National Survey of Student Engagement (NSSE, pronounced Nessie). NSSE was piloted in 1999 at approximately 70 institutions, and more institutions participate each year. The survey appeals especially to college and university presidents and trustees, perhaps because it’s one-stop, fixed-price assessment shopping. NSSE presents itself as an outside -- seemingly objective -- tool to glean inside information. Even more appealing, it provides feedback on a wide array of institutional issues, from course assignments to interpersonal relationships, in one well-organized document. Additionally, the report places an institution in context, so that a college can compare itself both with its own previous performance and with other colleges, whether generally or with those that share its characteristics. And it doesn’t require extra work from faculty. NSSE seems like a great answer.

Yet NSSE does not directly measure student learning; the survey tracks students’ perceptions and satisfaction, not their performance. Moreover, respondents record those perceptions quickly. In the 2007 NSSE, students were informed, “Filling out the questionnaire takes about 15 minutes,” yet the survey ran 28 pages, some of which included seven items to rate. So, as with its Scottish homonym, NSSE presents a snapshot of indicators, not the beast itself.

Importantly, NSSE is voluntary. A college or university can participate annually, intermittently, or never. If a college performs poorly, why would that college continue? If a university uses the report to, as they say in assessment lingo, close the loop, wouldn’t that university stagger participation to measure long-term improvements? Over its 10-year existence, more than 1,200 schools have participated in NSSE, and participation has increased every year, but only 774 schools were involved in 2008, which suggests intermittent use. In addition, some institutions use the paper version, while others use the Web version; each mode involves a different sample size based on total institutional enrollment. NSSE determines sample size and randomly selects respondents from the population file of first-years and seniors that an institution submits.

Perhaps all these factors lead NSSE to make the following statement on its Web site: “Most year-to-year changes in benchmark scores are likely attributable to subtle changes in the characteristics of an institution’s respondents or are simply random fluctuations and should not be used to judge the effectiveness of the institution. The assessment of whether or not benchmark scores are increasing is best done over several years. If specific efforts were taken on a campus in a given year to increase student-faculty interaction, for example, then changes in a benchmark score can be an assessment of the effectiveness of those efforts.”

This statement seems to claim that an increase in a score from one year to the next is random unless the institution was intentionally striving to improve, in which case, kudos. Yet, NSSE encourages parents to “interpret the results of the survey as standards for comparing how effectively colleges are contributing to learning” in five benchmark areas, including how academically challenging the institution is.

I have larger concerns, however, about assessment tools like NSSE, which are instruments of sociological research on human subjects. The humanities and arts are being asked to use a methodology in which we have not been trained and for which our disciplines might not be an appropriate fit. NSSE is just one example of current practices that employ outcomes-based sociological research, rubric-dominated methodology, and other approaches unfamiliar in many disciplines.

Such assessment announces that anyone can do it. I’ve seen drafts of outcomes and rubrics, and that’s not true. Programs like education and psychology develop well-honed, measurable outcomes and rubrics that break those outcomes into discernible criteria. Programs in the sciences do a less effective job; some science faculty assert that the endeavor is invalid without a control group, even while admitting that a control group, which would deny students the environment in which they most likely learn, would be unethical.

Those of us in the arts and the humanities want wide, lofty outcomes; we resist listing criteria because we disagree, often slightly or semantically, about what’s most important; we fear omission; and we want contingencies in our rubrics to account for unexpected — individual, creative, original — possibilities. Writing and visual art cannot easily be teased apart and measured. Critical thinking and creative thinking are habits of mind. How can NSSE or rubrics capture such characteristics?

Moreover, by practicing social science, often without reading a single text about its methods, arts and humanities faculty diminish the discipline we poach and lessen the value and integrity of our conclusions. If we don’t know what we’re doing (how many of us really understand the difference between direct and indirect measures, or between outcomes, objectives, goals, and competencies?), the results are questionable. To pretend otherwise is to thumb our noses at our social science colleagues.

Further, this one-size-fits-all, cookie-cutter mentality ignores that different disciplines have different priorities. Included in Thomas A. Angelo and K. Patricia Cross’s Classroom Assessment Techniques is a table of top-priority teaching goals by discipline. Priorities for English are “Writing skills,” “Think for oneself,” and “Analytic skills,” in that order. Arts, Humanities, and English have just one goal in common: “Think for oneself.” We can survey student perceptions of their thinking (an indirect measure), or maybe we know independent thinking when we see it, but how do we find thinking for oneself in a data set? These priorities aren’t even grammatically parallel, which may not matter to social scientists, but it matters to this poet!

Other priorities for Arts (“Aesthetic appreciation” and “Creativity”) and Humanities (“Value of subject” and “Openness to ideas”) are difficult, if not impossible, to measure directly. The priorities of Business and Sciences are more easily measured: “Apply principles,” “Terms and facts,” “Problem solving,” and “Concepts and theories.” So a key issue is whether the arts and humanities can develop ways to assess characteristics that current assessment methodology cannot really measure, or whether we must relinquish the desire to assess important characteristics and focus instead on easily measured outcomes.

Another table in Classroom Assessment Techniques lists perceived teaching roles. Humanities, English, and Social Sciences see “Higher-order thinking skills” as their most essential role, whereas Business and Medicine view “Jobs/careers” as most essential, Science and Math rank “Facts and principles” most highly, and Arts see “Student development” as primary. Both knowledge of facts and principles and job placement can be measured directly more easily than student development. For English, all other roles pale in comparison to “Higher-order thinking skills,” which 47 percent of respondents rated most essential; the next most important teaching role is “Student development,” at 19 percent. No other discipline shows so wide a gap between its first- and second-ranked roles. Surely, that’s what we should assess. If each discipline has different values, and weights those values differently, don’t we deserve a variety of assessment methodologies?

Lest it seem I’m bashing assessment altogether: I do advocate documenting what we do in the arts and humanities. Knowing what and how our students are learning can help us make wise curricular and pedagogical decisions. So, let’s see what we might glean from NSSE.

Here are items from the first page of the 2007 NSSE:

  • Asked questions in class or contributed to class discussions
  • Made a class presentation
  • Prepared two or more drafts of a paper or assignment before turning it in
  • Worked on a paper or project that required integrating ideas or information from various sources
  • Included diverse perspectives (different races, religions, genders, political beliefs, etc.) in class discussions or writing assignments

Students were asked to rate these and other items as “Very often,” “Often,” “Sometimes,” or “Never,” based on their experience at that institution during the current school year. These intellectual tasks are common in humanities courses.

In another section, students were asked how many books they had been assigned and how many they had read that weren’t assigned. They also reported how many papers of 20 or more pages they’d written, how many of five to 19 pages, and how many of fewer than five pages. We can quibble about these lengths, but, as an English professor, I agree with NSSE that putting ideas into writing engages students and that longer papers allow for research that integrates texts, synthesizes ideas, and encourages application of concepts. And reading books is good, too.

Another relevant NSSE question is “To what extent has your experience at this institution contributed to your knowledge, skills, and personal development in the following areas?” Included in the areas rated are the following:

  • Acquiring a broad general education
  • Writing clearly and effectively
  • Speaking clearly and effectively
  • Thinking critically and analytically
  • Working effectively with others

The English curriculum contributes to these areas, and we are often blamed for perceived shortcomings here. While NSSE measures perceptions, not learning, this list offers a simple overview of some established values for higher education. If we are at a loss for learning outcomes or struggle to be clear and concise, we have existing expectations from NSSE that we could adapt as outcomes.

In fact, we can reap rewards both in assessment and in our classrooms when students become more aware of their learning. To do this, we need some common language — perhaps phrases like writing clearly and effectively or integrating ideas or information from various sources — to talk about our courses and assignments. Professional organizations, such as the Modern Language Association in English or the College Art Association in the visual arts, could take the lead. Indeed, this article is adapted from a paper delivered at an MLA convention session on assessment, and the Education Committee of CAA has a session entitled “Pedagogy Not Politics: Faculty-Driven Assessment Strategies and Tools” at its 2009 conference.

We needn’t reorganize our classes through meta-teaching. Using some student-learning lingo, however, helps students connect their efforts across texts, assignments, and courses. Increasingly, my students reveal, for instance, that they use the writerly reading they develop in my creative writing courses to improve their critical writing in other courses. I haven’t altered my assignments much, but I now talk about them, including the reflective essay in students’ portfolios, so that students understand the skills they hone through practice and what they’ve accomplished. Perhaps I’m teaching to the test, to NSSE, because I attempt to shift student perceptions as well as the work they produce. But awareness makes for ambitious, engaged, thoughtful writers and readers.

Good teachers appraise their courses, adapt to new situations and information, and strive to improve. As Ken Bain points out in What the Best College Teachers Do, “a teacher should think about teaching (in a single session or an entire course) as a serious intellectual act, a kind of scholarship, a creation.” We are committed to teaching and learning, to developing appropriate programs and courses, and to the expectations for student achievement that the Western Association of Schools and Colleges asks of us. We can’t reasonably fight the North Central Association of Colleges and Schools mandate: “The organization provides evidence of student learning and teaching effectiveness that demonstrates it is fulfilling its educational mission.” Assessment is about providing evidence of what we do and its effects on our students. Our task in the arts and humanities is to determine what concepts like evidence, effects, and student learning mean for us. If NSSE helps us achieve that at the individual, program, or institutional level, great. But NSSE is best used not as an answer but as one way to frame our questions.
