Last week, my dean touted our college's rise in the U.S. News & World Report ranking of graduate colleges of education. As the anonymous author of Confessions of a Community College Dean explains, even administrators who dislike rankings have to play the game, and in many ways it's an administrator's job to play cheerleader whenever possible. But as two associations of colleges and universities gear up to support a Voluntary System of Accountability, it's time to look more seriously at what goes into ratings systems.

We all know the limits of the U.S. News rankings. My colleagues work hard and deserve praise, but I suspect the faculty in Gainesville do, too, where the University of Florida had to explain its college of education's drop in the rankings. U.S. News editors rely heavily on grant funding and reputational surveys to list the top 10 or 50 programs in areas where they have no substantive knowledge. That reliance on grant totals is why the University of Florida's ranking dropped: the dean recently decided it was a matter of honesty to exclude some grants that came to the college's lab school rather than to the main part of the college. (My university does not have a lab school.) But the U.S. News rankings do not honor such decisions. The editors' job is to sell magazines, and if that requires one-dimensional reporting, so be it.

In addition to the standard criticisms of U.S. News, I rarely hear my own impression voiced: the editors are lazy in a fundamental way. They rely on existing data provided by the institutions, circulate a few hundred surveys to gauge reputation, and voila! Rankings and sales.

The most important information on doctoral programs is available to academics and reporters alike, if only we would look: dissertations. My institution now requires all doctoral students to submit their dissertations electronically, and within a year they are available to the world. Even before electronic dissemination, dissertations were microfilmed, and the title, advisor, and other information about each were available from Dissertation Abstracts International. Every few months, my friend Penny Richards compiles a list of dissertations in our field (the history of education) and distributes it to an e-mail list for historians of education.

Anyone can take a further step and read the dissertations that doctoral programs produce. With Google Scholar now available, anyone can see whether a program's recent graduates published their research after graduating. With the Web, anyone can see where the graduates go afterwards. All it takes is a little time and gumshoe work ... what we used to call reporting.
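
To make that concrete, here is a minimal sketch of the kind of gumshoe work I mean. It is only an illustration: Google Scholar offers no official programming interface, so the sketch queries the public Crossref REST API as a stand-in, and the dissertation records in it are invented placeholders. A human reader would still have to judge whether any match is really the graduate's own published research.

```python
# A minimal sketch, not a production tool: given a few (author, dissertation
# title) pairs, ask the public Crossref REST API whether anything similar
# later appeared in the published literature. The records below are invented
# placeholders; real ones would come from Dissertation Abstracts International
# or an institution's electronic repository.
import requests

dissertations = [
    ("Jane Doe", "Progressive pedagogy in rural normal schools, 1890-1920"),
    ("John Roe", "School consolidation and community identity in the Midwest"),
]

for author, title in dissertations:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "query.author": author, "rows": 3},
        timeout=30,
    )
    resp.raise_for_status()
    print(f"\n{author}: {title}")
    for item in resp.json()["message"]["items"]:
        # Each hit is only a candidate; a human must still decide whether it
        # is the graduate's own work rather than a coincidental match.
        hit_title = (item.get("title") or ["(untitled)"])[0]
        venue = (item.get("container-title") or ["(no venue)"])[0]
        print(f"  candidate: {hit_title} -- {venue}")
```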

But reading dissertations is hard work, and probably far more boring than looking at the statistics that go into the U.S. News rankings. Yet even while some disciplines debate the value and format of dissertations, they remain the best evidence of what doctoral programs claim to produce: graduates who can conduct rigorous scholarship. (I'm not suggesting that people interested in evaluating a program spend weeks reading dissertations cover to cover, but the reality is that it doesn't take long with a batch of recent dissertations to get a sense of whether a program is producing original thinkers.)

Suppose the evaluation of doctoral programs required reading a sample of dissertations from the program over the past few years, together with follow-up data on where graduates end up and what happens to the research they conducted. That evaluation would be far more valuable than the U.S. News rankings, both to prospective students and to the public whose taxes are invested in graduate research programs.
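
The sampling step itself needs nothing elaborate. As a rough sketch, with invented file names standing in for a program's recent output, a reviewer could draw a reproducible random sample so that two evaluators end up reading the same dissertations:

```python
import random

# Invented placeholders; a real list would come from the program's
# electronic dissertation repository for the past few years.
recent_dissertations = [f"dissertation_{year}_{i:02d}.pdf"
                        for year in (2005, 2006, 2007) for i in range(15)]

rng = random.Random(2008)  # fixed seed: two reviewers draw the same sample
sample = rng.sample(recent_dissertations, k=10)
for name in sorted(sample):
    print(name)
```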

I do not expect U.S. News editors to approve any such project, because their job is to sell magazines, not to produce a rigorous external evaluation of higher education. But the annual gap between the U.S. News graduate rankings and the reality on the ground should remind us of what such facile rankings ignore.

That omission glares at me from the Voluntary System of Accountability, created by two of the largest higher-ed associations, the National Association of State Universities and Land-Grant Colleges and the American Association of State Colleges and Universities. In many ways, the VSA project and its compilation of data in a College Portrait constitute a reasonable response to demands for higher-education accountability, until we get to the VSA's pretense of measuring learning outcomes through one of three standardized measures.

What worries me about the VSA is not just that its oversight board includes no professors who currently teach, nor that NASULGC and AASCU chose three measures with little research support, nor that their choices funnel millions of dollars into the coffers of three test companies in a year when funding for public colleges and universities is dropping.

My greatest concern is that a standardized test fails to meet the legitimate need of prospective students and their families to know what a college actually does. When choosing between two performing-arts programs, a young friend of mine would have found the scores of these tests useless. Instead, she made her decision by observing rehearsals at each college, peeking inside the black box of a college classroom.

Nor do employers want fill-in-the-bubble or essay test scores. The Association of American Colleges and Universities sponsored a survey documenting that employers want to see the real work of students in situations that require evaluating messy problems and solving them. And I doubt that legislators and other policymakers see test statistics as a legitimate measure of learning in programs as disparate as classics, anthropology, physics, and economics. Except for Charles Miller and a few others -- and it is notable that, despite the calls for accountability, the Spellings Commission entirely ignored the curriculum -- I suspect legislators will be more concerned with graduation rates and with addressing student and parent concerns about college debt.

By picking standardized tests as the first and primary measure of undergraduate learning, the creators of the VSA threaten to impose a No Child Left Behind-like system on faculty. I am especially concerned that two-year colleges will slip into test prep as a substitute for teaching. While language in the Higher Education Act reauthorization bills approved by each house of Congress would forbid this type of imposition from Washington, NASULGC and AASCU would snatch defeat from the jaws of victory. The two associations have also missed an important opportunity to focus on what students do in coursework. At a time when libraries and library consortia are developing the capacity to collect and display student work, NASULGC and AASCU are ignoring the opportunity to show what students create.

Within a few years, most colleges will be able to afford the storage to collect electronic student portfolios. In the same way that anyone can read the dissertations that come from my university, anyone could read a capstone paper, watch a presentation, or listen to a senior recital.

No one should pretend that such a system would present a Lake Wobegon portrait: electronic portfolios should show the breadth of student work, warts and all. The public nature of such a system would be a more transparent and effective accountability mechanism than a set of numbers abstracted from an invisible sample of students on a test that few understand. We can do better by starting with the work students produce.
