
The National Research Council -- responding to criticism it received in the internal peer review of its forthcoming doctoral program rankings -- is changing the methodology in a few key places for the long-awaited project.

The changes, which are not yet final, are likely to divide the main ranking of each program into two separate rankings: one based on explicit faculty determinations of which criteria matter in given disciplines, and one based on implicit criteria. Further, the council is likely to release ranges of ratings at a 90 percent "confidence level," rather than the 50 percent target in the methodology released last year.

The use of confidence levels means that instead of saying that a given program is the second or eighth or 20th best, the council will say that it falls within a certain range. By raising the confidence level to 90 percent, instead of saying that there is a 50 percent chance that a program is between 20th and 26th, the council will say (to use that hypothetical) that there is a 90 percent chance that the program is between the 15th and 35th best in the nation, resulting in much broader ranges for the rankings.
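To make the effect of the higher confidence level concrete, here is a minimal sketch in Python -- the rank distribution, spread, and sample size are invented for illustration, and the NRC's actual statistical procedure is more elaborate -- showing how reporting a 90 percent range instead of a 50 percent range widens the interval:

```python
# Hypothetical illustration only (not the NRC's code or data): given a set
# of simulated rank estimates for one program, compare a 50 percent range
# (25th-75th percentile) with a 90 percent range (5th-95th percentile).
import numpy as np

rng = np.random.default_rng(0)

# Pretend we have 500 resampled rank estimates for a program centered near 23rd.
simulated_ranks = rng.normal(loc=23, scale=6, size=500).round().clip(min=1)

lo50, hi50 = np.percentile(simulated_ranks, [25, 75])
lo90, hi90 = np.percentile(simulated_ranks, [5, 95])

print(f"50% range: about {lo50:.0f}th to {hi50:.0f}th")
print(f"90% range: about {lo90:.0f}th to {hi90:.0f}th")
```

The 90 percent interval necessarily contains the 50 percent one, which is why the reported ranges grow so much broader.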

The additional changes to the methodology -- which was ostensibly released in final form in July -- suggest that further delays are likely for the rankings. NRC officials have for about a year declined to answer questions about the timing of the release, although the ratings are still expected in 2010.

Many graduate program directors and deans are increasingly frustrated by the timing of the project. Data collection (whatever methodology changes are adopted) began in 2006, with the rankings originally scheduled for release in 2007. Program officials note that the departure or arrival of a few faculty members skilled at landing grants means that some programs may have changed significantly in the intervening years. Further, with many universities looking at trimming graduate programs, some of those who run stellar but threatened programs have been hoping that the NRC rankings would bolster their defenses.

The NRC has not formally announced that it is changing the methodology. But Jeremiah P. Ostriker, chair of the committee overseeing the project and a professor of astronomy at Princeton University, described for Inside Higher Ed the changes that he said are "likely" but not yet certain.

On the question of the ranges to be reported, Ostriker said that the committee has long wanted to avoid the "spurious precision problem" of previous rankings, which implied certainty that a given program held a precise position relative to all others. Given that programs change constantly, that data and averages are imperfect, and a range of other factors, Ostriker said the rankings will be "more accurate" for being presented as ranges rather than single figures. He noted that "commercial" ranking efforts tend to give a single number, "but that's no excuse for us making the error."

While the idea of giving ranges was part of the methodology released last year, he said that the peer review comments for the rankings (and outside comments) led him and other committee members to question offering a range with only a 50 percent confidence level, meaning there is an equal chance that the program falls somewhere outside that range. Peer reviewers found so low a confidence level "confusing," so the plan is to raise it to 90 percent, which will have the effect of expanding the ranges.

Ostriker acknowledged that this change will make it more difficult for people to pinpoint exactly where a program stands. But he said that's because it is impossible to do so in any accurate way. "We wanted more honesty and more data and we wanted to be honest about the true uncertainties in rankings," he said. "We hope it doesn't make people unhappy, but if that does make people unhappy, they will need to get used to it."

The other major change likely to be made before the release is the division of the overall departmental rankings into two. (Subcategory rankings are also being released on areas such as the student experience.) The overall rankings are based on what Ostriker called "explicit" and "implicit" faculty weights on what matters in departments.

Faculty in various disciplines were asked to weight the relative importance of such factors as the average number of faculty publications per capita, average citations per publication, the percentage of faculty holding grants, and so forth. The idea is that some factors are more important in some disciplines than in others: science disciplines focus more on landing grants, for instance, while some humanities disciplines value books as a sign of scholarly eminence.

The explicit ranking was based on determining how a discipline values the various factors and then applying those disciplinary weights to the data for individual departments. For the implicit ranking, faculty members were asked to rank departmental quality in their disciplines, institution by institution; the NRC then worked backwards to see which characteristics were shared by highly ranked departments. Those inferred weights were then applied, department by department, to produce a ranking that was to be averaged with the explicit one.
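As a rough illustration of how the two calculations differ, consider the sketch below in Python. The factor names, weights, and data are invented for illustration, and the NRC's statistical model is considerably more sophisticated; the point is only that the explicit score applies survey-stated weights directly, while the implicit score infers weights by regressing reputational ratings on the same program characteristics:

```python
# Hypothetical sketch (invented data; not the NRC's model). "Explicit"
# applies weights that faculty stated in a survey; "implicit" recovers the
# weights faculty appear to use when rating overall program quality.
import numpy as np

rng = np.random.default_rng(1)

n_programs, n_factors = 40, 4  # e.g. publications, citations, grants, awards
X = rng.normal(size=(n_programs, n_factors))  # standardized program data

# Explicit: faculty in the discipline state how much each factor matters.
stated_weights = np.array([0.35, 0.30, 0.25, 0.10])
explicit_scores = X @ stated_weights

# Implicit: faculty rate overall program quality; a regression of those
# ratings on the same characteristics recovers the weights they imply.
reputational = X @ np.array([0.25, 0.20, 0.20, 0.35]) \
    + rng.normal(scale=0.1, size=n_programs)
implied_weights, *_ = np.linalg.lstsq(X, reputational, rcond=None)
implicit_scores = X @ implied_weights

print("explicit top 5 programs:", np.argsort(-explicit_scores)[:5])
print("implicit top 5 programs:", np.argsort(-implicit_scores)[:5])
```

If the stated and implied weights diverge, the two calculations will order departments differently, which is the argument for publishing them separately.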

Now, Ostriker said, two separate overall rankings -- one based on the explicit calculation and one on the implicit calculation -- will be released (each with ranges). He said that peer reviewers felt that more information was provided this way than by merging the two figures.

The main difference between the explicit and implicit rankings, Ostriker said, is that while faculty members don't identify program size as a major factor to evaluate, the implicit rankings suggest that faculty members value size. So, generally, larger programs will fare better in the implicit ranking.

Ostriker said he realized some people might just average the two rankings together, but said that the NRC believed it would be more accurate to release two rankings (in addition to the subcategory rankings) than one.

He declined to say when the advisory committee would finalize the methodology changes or approve the release of the rankings. When the then-final methodology was released in July, the rankings were expected to follow within a few months -- and that schedule was already far behind earlier projections.

How the changes will go over with graduate educators is unclear. Several contacted by Inside Higher Ed (some of whom regularly e-mail us to ask that we pester the NRC about timing since they don't want to offend those doing the rankings) said they were surprised by methodology changes at such a late date and that they wished the project could be wrapped up already.

David Shulenburger, vice president of academic affairs at the Association of Public and Land-Grant Universities, said that he didn't think the expanded ranges or the divided main ranking would be troublesome to many universities. He said that graduate programs improve "a single thing at a time" so that departments gain more by comparing their raw data on various issues than focusing on an overall ranking. "It's healthier for us to decide which issues are most important" rather than relying on an overall ranking, he said.

The issue Shulenburger considers more problematic is the passage of time without release of the rankings. "It clearly would have been much more valuable with current data," he said. "It's going to be older. People will use what they have, but there have been changes" in many departments, he said.

Robert Morse, who directs college ratings at U.S. News & World Report (including a graduate program ranking each year), said he also saw serious credibility problems with basing rankings on information that "is getting old and stale."

U.S. News primarily uses "peer evaluations" for its graduate school rankings (in essence, surveying faculty members on what they think of other programs), although more complicated formulas are used in some fields. Morse acknowledged that critics call his magazine's formula "simplistic," but he questioned whether the NRC was going too far in the opposite direction.

"They seem to be trying to produce something so sophisticated and complicated and nuanced that they think will give it credibility in the marketplace. I just wonder whether anyone's going to understand it," Morse said. "Do you need a Ph.D. to understand it? If you can't understand it, I just wonder whether it's going to be accepted."
