
WASHINGTON -- After the generally skeptical and even disappointed response of college officials to the National Research Council's long-awaited rankings of doctoral programs in September, last week's follow-up meeting of NRC officials and college administrators might well have been tense.

But at Friday's convocation on how colleges are analyzing their program data and how those data can be improved in the future, it seemed that much of the initial confusion had subsided and that institutions -- at least, the 80 or so whose officials participated either in person or via live webcast -- were finding productive ways to put the data to use.

"In general, we hope this will be the start of a useful dialogue about how the data can be used," said Jeremiah P. Ostriker, a professor of astrophysics at Princeton University and chair of the NRC committee that prepared the rankings. No such gathering took place after the last rankings came out in 1995, and this time the committee wanted to provide more analytic support to the departments included in the survey. Ostriker said the committee planned the session long before the data were released, so it was not a response to the overwhelming confusion over the complicated methodology and largely inconclusive rankings, both of which Ostriker acknowledged Friday.

"The fact that we didn't give them a simple one, two, three was confusing to them at first," he said, but in time they realized the variation in data points makes it impossible to place that much confidence in exact rankings. "That is a fluctuating number, and you have to honestly treat it as a fluctuating number." A range is an honest assessment, he said.

Departments had good reasons to be angered by the final product, some panelists said. The data were collected in the 2005-6 academic year and intended to be released in 2007, so by the time they came out last year, they were well out of date. (That was also a criticism of the 1995 data, though the collection-release gap for those was only two years.)

On top of that, two separate rankings methods delivered vastly different results for some programs, and the rankings, which provided a range in which a program might fall, were far from definitive. For example, the communications program at the University of Michigan, which presented Friday on using the rankings to broaden the scope of its internal review of academic programs, stood either between Nos. 2 and 58 or Nos. 7 and 22, depending on the ranking method. The data collection and reporting process was also extremely expensive, both for colleges and for the NRC.

At the meeting Friday, there was no shortage of criticism of the NRC. There was the occasional show of frustration from attendees, and a panel of provosts identified areas of the rankings that needed improvement. "I would say unambiguously the effort should continue," said Suzanne Ortega of the University of New Mexico, "but we must find a way to simplify data collection," which she said is a major drain on resources.

Eric Kaler, provost of the State University of New York at Stony Brook, said the rankings reflect the "real uncertainty" about which elements of a program matter most. It's not possible to determine exact placement, he said, and rankings like those from U.S. News & World Report (which assign colleges and graduate programs specific numerical ranks) don't reflect that. "Clearly a study needs to be done again," said Kaler, who will become president of the University of Minnesota this summer. "If we don't do it ourselves, someone else will."

But he suggested that next time the process be more transparent. Kaler echoed the concerns of many others regarding faculty lists; there was confusion over which faculty members should be included on the NRC lists, resulting in questions about the integrity of the results, which were determined in part by faculty accomplishments. Kaler and others also want the next survey to account for interdisciplinary programs.

For the most part, panelists and presenters agreed that the data have been useful in providing a starting point for discussions about the strengths and weaknesses of programs, which can help campus leaders decide which programs most need and deserve improvement and which should receive scarce resources.

At the University of Michigan, the NRC data are supplementing internal program reviews by providing a means to compare programs with others inside and outside the institution, said Shelly Conner, assistant dean of Michigan's graduate school. She can identify which programs are outliers, as well as other institutions with which she could share ideas for improvement.

"It provides a quantifiable anchor for conversations around issues of funding, diversity, completion, and time to degree," Conner wrote in her presentation abstract. "We use the data to help us to understand what we are (or are not) doing to improve graduate education."

Committee members could not speculate on when the next NRC survey might take place, or what form it would take. Participants raised questions about the survey's frequency; some suggested data collection would be easier and cheaper if it took place every couple of years. Others wondered whether the workload should shift away from the NRC, placing more responsibility on either the colleges or a third party that could compile the report.

Charlotte V. Kuh, executive director of the NRC Policy and Global Affairs Division and study director of the rankings project, admitted that she didn't know what to expect from the meeting. But she was pleased that programs are using the rankings to analyze their performance, compare themselves to peers and track their progress. "What we're hearing now," Kuh said, is that "people have gotten beyond the initial shock, and now they're really looking at what the data tell them."
