GARDEN GROVE, Calif. -- Ask the many assessment haters in higher education who is most to blame for what they perceive as the fixation on trying to measure student learning outcomes, and they are likely to put accreditors at the top of the list.

Which is why it was so unexpected last week to hear a group of experts on student learning tell attendees at a regional accreditor's conference here that most assessment activity to date has been a "hot mess" and that efforts to "measure" how much students learn should be used to help individual students and improve the quality of instruction, not to judge the performance of colleges and universities.

The session took place at the Academic Resource Conference, the annual gathering of the WASC Senior College and University Commission, which accredits institutions in California, Hawaii and the Pacific Islands. The panel's title built off the conference's theme of "provocative questions and courageous answers," and asked, in regard to teaching, learning and assessment, "is higher education accomplishing what it said it would?"

Not surprisingly, given such a broadly framed question, the conversation that unfolded was wide-ranging and, at times, scattershot. But at its core, the discussion revolved around whether the way most colleges have gone about judging whether their students are learning -- by defining student learning outcomes and then gauging whether students achieve them -- helps institutions, individually and collectively, prove they are doing a good job.

The answers were pretty uniformly no, despite all the activity colleges have engaged in during the last decade.

"There's a paradox that puzzles me and should puzzle all of us," said John Etchemendy, former provost at Stanford University, who is also a commissioner of the Western accrediting commission and a member of the federal panel that advises the U.S. education secretary on accreditation. The evidence is unequivocal, he said, that "the answer to the question on the screen -- is higher education accomplishing what it said it would? -- is absolutely yes," based on how much more college-goers earn over their lifetimes than Americans without a degree, among other indicators.

But "whenever we try to directly measure what students have learned, what they have gotten out of their education," Etchemendy continued, "the effect is tiny, if any. We can see the overall effects, but we cannot show directly what it is, how it is that we’re changing the kids."

Part of the problem, said Natasha Jankowski, director of the National Institute of Learning Outcomes Assessment, is defining what assessment is and what it isn't -- or, more precisely, differentiating between different kinds of assessment: that used for individual and institutional improvement and that used for external accountability purposes.

"There is assessment about informing my teaching" and students' learning -- understanding how students respond to or gain from certain kinds of content or instructional approaches, and developing evidence "that I would need to see to make a change in how I teach something," she said.

"That's very different from 'have we [in higher education generally] been effective over time?'" Jankowski said. The latter requires marshaling "a variety of evidence" of performance on numerous fronts (economic as well as educational) to a range of audiences (politicians, accreditors, students and parents, employers, the public), and "one test or measure [of student learning] isn't going to help us in that space." (A 2007 essay in Inside Higher Ed, "Assessment for 'Us' and Assessment for 'Them'" captured this conundrum well.)

Much of the assessment work in the last decade has focused on trying to develop quantifiable proof that institutions are helping their students, collectively, learn, with the aim of creating a measure of educational quality that would be comparable across institutions. This push was often driven by accreditors' pressure on colleges, which was driven in turn by federal government pressure on accreditors. (One participant in the Western accreditor's panel, Jose F. Moreno, an associate professor of Latino education and policy studies at California State University at Long Beach, recalled that when institutions like his were awaiting visits from the accreditor, they would often say "the WASC-itos were coming," a belittling reference to hordes of regulators about to descend.)

That perception of accreditors is why it is noteworthy that the Western accreditor, under its new president, Jamie Studley, staged a conversation that asked hard questions about assessment.

“We chose the theme Provocative Questions, Courageous Answers to underscore that WSCUC is committed to the same self-reflection and continuous improvement we expect of our institutions," Studley said. "When done well, assessment is a powerful tool that supports student success. Assessment has certainly evolved from its earliest days, and it’s our responsibility as an accreditor to encourage its wise application in the context of effective oversight and improvement focused on equity and important outcomes for all students.”

Achieving that goal would require moving beyond what Jankowski called "assessment as bureaucratic machine," which often resulted in institutions slapping together ill-conceived efforts to try to measure something to prove they were doing so.

"At a lot of places," Jankowski said, "it was, 'You need some learning outcomes -- put something together.' 'What are learning outcomes?' 'I don't care. Just fill this out.'

"It's not just that faculty members are crabby and hate change … There are good reasons why faculty hate it. It's real and it's earned," Jankowski said. (An Inside Higher Ed survey of faculty members last year, for instance, found that 59 percent of respondents agreed that assessment efforts "seem primarily focused on satisfying outside groups such as accreditors or politicians," rather than serving students.) Essays like this also reflect faculty disdain.

It's time for those in the assessment field to "own up to the fact that everyone had a first-round 'hot mess' go of it," she said. "We had a round of assessment that was really detrimental, incredibly measurement focused."

What Might Round 2 Look Like?

No one on the panel was arguing that teaching and learning are unimportant or that college officials and faculties shouldn't regularly analyze how well both are going in their classrooms -- far from it.

But "we need to worry less about the architectonic of how assessment works," Etchemendy said, and more about periodically checking "whether we’re teaching what we’re trying to achieve, and is the design still a good design, or maybe times have changed.

"If we discover that our class is not working or that our students are not getting what we want them to get out of the class, then I would think we would all try to change it. Those are the good parts of assessment, and I think anybody can buy in to that."

If efforts to measure student learning in a quantifiable way have been counterproductive, what should constructive assessment look like?

It should start, Jankowski and others said, with understanding what an institution (or an instructor, at the granular level) wants students to know and be able to do.

Sharon B. Hamill, a professor of developmental psychology and faculty director of the Institute for Palliative Care at California State University at San Marcos, suggested a form of "backward design" focused on "where do I want them to end up, and then how do I help them get there." "Think to yourself," she said, "'if they don’t remember another thing, they’ll remember this.'"

Robert Shireman, a senior fellow at the Century Foundation and a former Obama administration official who has railed against what he calls the "inane" focus on student learning outcomes, attended the Western accreditor's session and later led another called "Improving Assessment by Putting a Leash on the Dogma." He said institutions should focus on making sure students are persisting in their academic programs and on understanding what's impeding those who don't.

But focusing on outcomes like those doesn't necessarily capture the amount or quality of the learning, since institutions have been known to let students continue through their programs without demanding much in return.

The best way to gauge that, Shireman said, is to do "random checks of artifacts of the teaching and learning process (student work, instructor feedback, etc.). Ideally, portfolios of student work, not cherry-picked, would be available for public review (or at least external peer review). This should be arranged by the school but checked by accreditors." Such an approach would be designed, he said, to protect against diploma mills or other lesser-quality institutions.

But how might one go about answering the question that the Western accreditor's session started with: "Is higher education accomplishing what it said it would?" If it's not with assessment of student learning outcomes at the course or institutional level, it should be with "external, objective measures that indirectly measure program and institutional success -- things that can’t be fudged," Etchemendy said.

"Whether they graduate; whether they manage high-, well-paying jobs 10 to 15 years out, are they repaying their loans, what do they think about their institutions?" he said. "Those are the things I’m really interested in measuring."
