
Accrediting officials have heard the message ad nauseam: Policy makers and the public need more evidence that colleges are educating their students, and it's up to higher education -- accreditors included -- to produce that evidence. The argument was made for the umpteenth time Tuesday at the annual meeting of the Council for Higher Education Accreditation, by a panel of higher education researchers and experts on assessment. The reaction from the audience of college officials and accreditors suggested that at this point, outright opposition has morphed into resignation, and even partial embrace.

But it was equally clear that while they generally accept the idea that colleges must prove that they are educating their students, they have serious problems with the underlying premise that the only truly useful ways of measuring student learning outcomes are those that allow for comparing a college against its peers. Such measures often result in oversimplification and fail to account for differences between institutions, they argued.

The current push for accountability, which has intensified in the wake of the report last fall of the Secretary of Education's Commission on the Future of Higher Education, has embraced the notion that students and families, prospective employers and the public demand methods of comparing one college's performance against another.

The strongest proponent of that view at Tuesday's session at the CHEA annual conference in Washington was Margaret A. (Peg) Miller, director of the Center for the Study of Higher Education at the University of Virginia's Curry School of Education. Miller said her two decades of work trying to assess college performance -- at the State Council of Higher Education in Virginia and the National Forum on College-Level Learning -- had persuaded her that it was insufficient to gauge an individual college's success based only on information it provided about itself.

"There's no way to take a campus-based report and say, 'We're doing well or we're not doing well,'" Miller said. "The answer to the question, 'How well are we doing?' really depends on an answer to a prior question: Compared to what? Compared to whom?"

She noted that most faculty members and college leaders seem to have no problem using standardized tests to judge the quality of their student applicants, but that they have "been reluctant to use standardized measures to say something about the quality of their own work." The time when higher education officials could respond to calls for accountability by ducking their heads and hoping the calls go away has passed, Miller said, given the intensifying pressure and threats of government intervention.

"We have to pay attention to this message, because it has been consistent and it has been long term and it is getting louder," Miller said. "If we can ... look at ourselves carefully and rigorously, I think there's a very good chance that we will be able to control the terms in which this question is answered. If we can keep this question within our own control, we will do something that K-12 was unable to do, to everybody's great sadness."

Peter T. Ewell, vice president at the National Center for Higher Education Management Systems, had arguably the best line of the day (borrowed, he acknowledged) to describe the logic behind the push for comparability. Oftentimes, he said, institutions want to produce data showing that their students are improving over time. But is that enough? "I don't want to be flown by an airplane pilot who's better every flight," he said to laughs, implying that improvement alone doesn't do a whole lot of good if you don't know whether the pilot stacks up well against his or her peers.

But Ewell and Jillian Kinzie, associate director of the National Survey of Student Engagement Institute, both expressed some misgivings about the push for comparability. Kinzie, whose survey is among the standardized measurements being promoted for possible inclusion in whatever accountability system (or systems) might emerge from within higher education in the coming months and years, said she favored the idea that assessment is most valuable as a way of helping institutions improve themselves, rather than as a way to compare one college's performance against others for consumer purposes.

Ewell said he feared that the more that colleges (or associations or accreditors) focus on coming up with standardized ways to measure one institution’s performance against others, the less energy and inclination they’ll have to find other, perhaps better methods of assessing themselves for self-improvement purposes.

Similar Discussion, Different Context

As that conversation unfolded, a parallel discussion with similar themes took place in the next room at the CHEA meeting. Accreditors from the for-profit, career education sector positioned their more-than-a-decade-old emphasis on measurable outcomes as a model that the broader higher education world could learn from.

In a panel discussion, leaders of accrediting bodies working in the career education sector stressed that the focus on measurable outcomes -- and in particular, completion and placement rates -- has had a "transformative effect" on career-oriented institutions since the approval of more stringent accountability standards contained within the 1992 Higher Education Act.

“Our sector was dragged into outcomes measurement kicking and screaming. No one wanted to do it, no one knew how to do it,” said Elise Scanlon, executive director of the Accrediting Commission of Career Schools and Colleges of Technology. Sound familiar?

But while the panelists all agreed that numbers don’t mean everything, they pointed to tangible improvements that the relatively new focus on numbers has inspired. Since the change in requirements, Scanlon said, institutions have stepped up their focus on completion and placement in a number of ways. These include becoming more responsive to student needs, concerns and grievances; expanding student services; implementing attendance procedures; involving employers with program development; pursuing articulation agreements; and paying closer attention to the markets dictating their graduates’ job opportunities.

“If completion is not the measure, maybe it’s something else. I think every sector of education should be thinking about what it is,” Scanlon said. “We’re not being honest with ourselves in higher education if we don’t say that there must be some benchmark that’s so low that it’s not acceptable.”

However, lessons from the career education sector are likely to be resisted by many traditional higher education leaders who see the applicability of the completion/placement model as limited.

Even within the for-profit world, it’s not a one-size-fits-all approach, said Paula E. Peinovich, president of Walden University, an online, for-profit, regionally accredited doctoral institution. Walden, which Peinovich said primarily attracts individuals who are already employed, does not include placement rates in its self-assessment system, but instead focuses on a variety of measures that include, for instance, assessing the performance of K-12 students taught by their education degree graduates. Assessment tools need to be geared toward an institution’s mission, Peinovich said.
