
CHICAGO -- Walking from meeting room to meeting room at this week's annual conference of the Higher Learning Commission of the North Central Association of Colleges and Schools, and scanning the program, it was impossible not to be struck by the fact that a good half of the hundreds of sessions had the words "student outcomes," "assessment," or "accountability" embedded in their titles.

Granted, that may not be surprising given that, as noted by Steven D. Crow, the commission's departing president, the theme of the meeting was "Finding Common Ground: Accreditation, Assessment and Accountability."

But in session after session -- with titles such as "Starting and Promoting Learning Outcomes: One College's Story," "Faculty Ownership of Academic Assessment," and "Student Learning Outcomes Assessment: Creating Change in Pedagogy" -- officials from colleges (in the aforementioned cases, Indiana's Holy Cross College, Terra State Community College, and Columbia College Chicago) described for eager colleagues their varied attempts to figure out what they want their students to learn and to measure how well they have learned it. The presentations were replete with examples of changes made to curriculums in response to the results of the experiments, and of occasional false starts and missteps that required new tacks.

The volume and earnestness of the efforts were particularly jarring in the context of the mantra that has been heard so much from Education Secretary Margaret Spellings and other policy makers in Washington in the last two years, suggesting that colleges have been doing far too little to measure and make publicly available results about how much, and how effectively, their students are learning.

While participants in the meeting acknowledged that pressure from Spellings's Commission on the Future of Higher Education and other state and federal scrutiny had played a key role in accelerating campus efforts to measure student outcomes, it was also clear -- since many of the initiatives described in the various sessions were started years ago -- that contrary to some of the rhetoric, colleges have not been sitting idly by while Rome burns (and other nations gain on the United States in educational achievement). The assessment programs featured at the North Central meeting were generally faculty-created, with a focus on a particular course, program, or college -- not the nationally normed sorts of measurements that the Spellings panel favored.

"It sure seems like they don't know what we're doing out here," Patti Frohrib, director of research and development at Fox Valley Technical College, said of Spellings and other politicians in Washington, echoing a sentiment expressed by many of her colleagues at the meeting. "Some of the criticism is puzzling to us, because we look at our student outcomes all the time."

And yet.... The multiplicity of approaches taken by different colleges, the fact that so many of the experiments are small, sometimes testing theories or tactics on just a few dozen students, and the inevitably halting nature of the experimentation drove home the validity of at least part of the recent criticism of higher education by Spellings and others.

At a time when it is widely agreed that American colleges and universities will need to educate significantly more students without a major infusion of additional money, it is likely to take a long time for the many experiments that faculty members and administrators are trying out to yield proven methods that can be imitated and implemented on the scale necessary to meet the coming needs. The many individualized efforts also do not address Spellings's plaint -- which many college officials dispute -- that the public needs to be able to compare one college's success with another's, which it cannot do without comparable measurements of student learning.

The meeting "puts the lie to the idea that institutions have not been hard at work for a long while" on figuring out what their students should be learning and how well they're doing so, Crow said in an interview at the conference, his last after 25 years at the country's largest regional accrediting agency. "But whether the individual efforts are all adding up is another matter."

Too Many to Count

The Higher Learning Commission is not only the biggest of the regional accreditors, spanning 19 Midwestern and mountain states, but it is also probably the most diverse, because it has been more willing than other agencies to give its stamp of approval to those for-profit institutions that have sought regional accreditation. So its meeting is probably more broadly indicative of what is happening in higher education than most other comparable gatherings, although it is seen as having pushed assessment somewhat more aggressively than several of its peer accreditors.

Scores of sessions at the conference featured deans, professors and others describing the steps they had taken to define what they believed their students needed to know, to measure how well the students had learned it, and to change what or how they teach in response.

Officials at the University of Southern Indiana, for instance, discussed how they had instituted a new system to try to get students who had failed to place out of the lowest developmental math course into a higher one (Math 100) through a three-week version of the lower-level course. The "rapid review" approach allowed nearly two dozen students in the first cohort to proceed into the higher-level course four weeks into the first semester.

More than 80 percent of them passed it, and nearly 60 percent earned C's or better -- a better outcome than the general population that had tested into Math 100 originally, said Kathy Rodgers, chair of the math department there. The approach saved the students "a semester of tuition and a semester of time," she said. Southern Indiana plans to try to replicate its success this year, and then hopes to expand the program to other courses in math and possibly into other disciplines across the campus, Rodgers said.

In their presentation Monday, officials at Kent State University sought to address the core tension that underlies the debate about student learning: whether the sort of "assessment" college leaders and faculty members have long done to help them gauge their own effectiveness for internal purposes can serve the growing external demands for accountability.

Laura L. Davis, associate provost for planning and academic resource management there, noted that the university has worked for seven years -- as part of the Higher Learning Commission's Academic Quality Improvement Program -- to redesign its first-year math and English curriculums, among numerous other efforts designed to improve the university's effectiveness.

In recent years, said Stephane E. Booth, associate provost for academic quality and improvement at Kent, Ohio legislators and the state's new governor have ramped up their demands on Kent State and Ohio's other colleges to prove they are performing well. While many college officials worry that external demands for accountability will force them to engage in practices that will conflict with, rather than reinforce, their own internal educational goals, Booth said Kent State had managed, in general, to mesh the two.

"We try to not just be generating data because someone at the state level is asking for it," she said. "We try to process it in a way that is going to serve the institution.... And many of the things we have undergone internally to improve our own processes and student learning have helped us to be able to respond quickly and agilely to state mandates."

The extent to which the generally embraced practice of internal assessment for colleges' own purposes and the externally mandated accountability movement complement or conflict with each other remains in dispute, even among those who have accepted the Higher Learning Commission's prodding after initially rebuffing it.

Les Garner, president of Iowa's Cornell College and a member of the accrediting group's board, acknowledges that institutions like his initially resisted the agency's pressure to take student learning measurement and reporting seriously, because they "didn't know how to do it, and worried about the time it would take, and assumed that the success of our students would speak for itself."

The accreditor's pressure ultimately paid off, Garner said, in helping colleges like Cornell improve their own performance, part of the traditional role of accreditors. He is less sure that the growing pressure on colleges to collect and report data on student learning will have the same effect. "I worry sometimes that with all the time and energy we spend collecting data, we're going to spend more time collecting it than using it to improve ourselves," Garner said.

Crow has been in the thick of the federal debate over student learning outcomes, caught, like other regional accreditors, between federal officials seeking to hold colleges more accountable from the outside and college officials who prefer methods of assessment that focus on internal improvement. As he surveyed his own group's meeting, which showed colleges engaging in vigorous activity that may or may not serve both purposes, Crow seemed not at all sure that a solution to the dilemma is in the offing.

"There are 1,000 twinkling lights," he said, referring to the plethora of disparate approaches to assessment on display at the conference, but "1,000 twinkling lights does not amount to national accountability."
