
It is rankings time again, as everyone interested in colleges and universities knows. This annual celebration offers something for everyone: it gives the rich elite colleges a way to demonstrate their presumed superiority, and it gives everyone else an opportunity to identify the errors, misconceptions, and ideological biases that inform the lists. As often observed, when institutions rise in spurious rankings, they publicize the results; when they fall in the same rankings, they critique the methodology. While the debate is entertaining and often substantive, the key issues in rankings sometimes get lost.

Rankings are what they are because the American public likes lists. The public knows that rankings are mostly artifacts of wealth and publicity, understands that the qualities represented may or may not have anything to do with the effectiveness of institutions, and recognizes that the institutions and the rankings organizations may well manipulate the data and the calculations to match anticipated results. Nonetheless, people like to buy the magazines and reports; they like to see who is where and who may have risen or fallen.

Most institutions, except for those ranked No. 1, find fault with the whole process, and while some college leaders have tried to opt out of a competition that is flawed in almost every way, their public wants to see them listed in the top 10, 20, 50, or whatever category matters. Best dressed, most talented, best small college, most popular movie, No. 1 football team -- all these designations feed Americans' insecurity about their own ability to make a judgment. So they make some choice -- my college, my football team, my favorite movie -- and then seek validation in a pseudo-scientific ranking system.

We can surely complain, try to help the organizations that produce these things do them better, and point out the failings in the different rankings, but since the public wants them, likes them, buys them, and argues about them, the organizations that produce them are clearly delivering a popular item. That the results distort the educational purposes of many institutions, that the structure of most of these rankings reflects wealth, and that prestige and effectiveness are confounded in the data simply demonstrate that in the absence of serious academic evaluations of substance and reliability, pop science will fill the void.

Having shunned this marketplace of ideas, having failed to engage the question of measuring institutional effectiveness and value in any meaningful way, and having continued in large measure to cooperate voluntarily with these commercial products, we should be embarrassed to claim the high ground by declaring the commercial ratings deficient in academic seriousness. In truth, it is the academic industry that is deficient in evaluative seriousness; the commercial folks are just doing what they do: selling a popular product to the public in the absence of significant competition.
