Much of the recent conversation about accountability in higher education encourages the expectation of national comparisons, similar to the profitable activities of U.S. News and its many rankings imitators. Within the calls for measures of improved student performance or student achievement lurks the shadow of relative comparisons among institutions. Even when national organizations representing higher education offer cautions about this form of comparative analysis, everyone with any experience with institutional measurement knows that data points generated on a national basis will be used to compare institutions, whether or not the institutions are comparable and whether or not the data are reliable. We will surely get the “Top Ten” national performers on some measure selected through a political process as the litmus test of student achievement.

Even knowing that there’s something quixotic about taking on this issue, Reality Check can’t help it. The design of performance measurements that are effective and sensitive to the composition and characteristics of the institutions and individuals being measured is surely one of the most daunting challenges of the academic industry. Our difficulties reflect mostly the complexity of the industry rather than any lack of sophistication or wisdom on the part of the academic community -- and much of the focus in this debate has been on the issue of student performance, one of the most complicated and difficult-to-define measurement universes in academe.

A clear look at a much easier measure illustrates the difficulty of interpreting simple measures of institutional performance. Most research universities track their competitiveness by closely following the annual data on federal research expenditures reported by the NSF. These data have many virtues for those interested in university performance. For the most part, they reflect peer-reviewed research productivity. Measuring expenditures rather than awards evens out the distorting effect of large multi-year grants, and the data are reported by an impartial national agency. Still, comparisons of research performance among universities based on federal research expenditures suffer from many of the common defects of other cross-institution evaluations.

Often, small university systems will inappropriately compare themselves to large single universities to illustrate the value of their bureaucratic and political organization of the educational enterprise rather than their campus-based academic productivity. Rather than reporting on the performance of the Amherst campus, for example, we may get a report on the University of Massachusetts system that includes five campuses, one of which is a stand-alone medical school.

Others work diligently to construct comparative analyses that correct for on- and off-campus activities so that the universe compared is composed only of single-campus institutional enterprises. In these studies (such as our own Top American Research Universities), the comparison is between UMass Amherst and Indiana University Bloomington, rather than between the IU system and the UMass system. Even this approach, which has proved particularly useful for many students of university performance, obscures some fundamental difficulties in comparison.

Some university campuses contain research-intensive medical centers while others do not, either because a related medical center operates at some distance or because the institution has never been involved with a medical center. While analyses such as the one I’ve done with Elizabeth D. Capaldi, et al., in The Top American Research Universities show the federal research performance of single-campus institutions, whatever their political organization and whether or not they include a medical school, we still must confront the problem of comparing dissimilar institutions.

To assess the medical school bias in the research data, we analyzed TheCenter’s single-campus institutions by removing the federal research expenditures attributable to medical schools resident on these campuses. The results of this analysis, available online as Deconstructing University Rankings: Medicine and Engineering, and Single Campus Research Competitiveness, 2005, demonstrate the sensitivity of university research ranking data to the composition of an institution’s academic programs. Although some observers assume that medical schools produce a significant impact on institutional research productivity, this is not always so. Some medical schools are indeed research intensive and compete successfully for significant amounts of federal peer-reviewed support, contributing to the research ranking of their campus. Other medical schools, however, focus primarily on medical education, and their contribution to their host campus’s research activity is modest.

Simple assumptions about how all these variables affect the measures of institutional performance often do not hold. If we take The Top American Research Universities federal research expenditure rankings and re-rank the institutions without including the medical school federal research component in the campus totals, we get a much different hierarchy.
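To make the arithmetic concrete, here is a minimal sketch in Python of what such a re-ranking involves. The campus names and dollar amounts below are invented placeholders rather than NSF or TheCenter figures; the code only shows the mechanics of subtracting the medical school component from each campus total and ranking the remainder.

```python
# A minimal, purely illustrative sketch of the arithmetic behind such a
# re-ranking. The campus names and dollar figures are hypothetical
# placeholders, not NSF or TheCenter data.

campuses = {
    # campus: (total federal research $M, medical school share $M)
    "Campus A": (820, 410),
    "Campus B": (790, 0),    # no medical school on this campus
    "Campus C": (760, 520),
    "Campus D": (700, 150),
}

def rank(values):
    """Return {campus: rank}, with rank 1 for the largest value."""
    ordered = sorted(values, key=values.get, reverse=True)
    return {campus: position + 1 for position, campus in enumerate(ordered)}

totals = {campus: total for campus, (total, _) in campuses.items()}
nonmedical = {campus: total - med for campus, (total, med) in campuses.items()}

rank_all = rank(totals)
rank_nonmed = rank(nonmedical)

for campus in campuses:
    print(f"{campus}: rank {rank_all[campus]} on all federal research, "
          f"rank {rank_nonmed[campus]} with medical school research removed")
```

Running the sketch shows the pattern at issue: a hypothetical campus whose research is concentrated in its medical school drops once that component is removed, while a campus without a medical school rises.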

The Massachusetts Institute of Technology, for example, which ranked 11th on federal research over all, rises to second place when compared to all institutions with federal medical school research expenditures removed. CalTech rises from 27th to 11th, Berkeley from 23rd to 9th. The University of Michigan, however, ranks third in the stack whether compared to all institutions or compared to all institutions with federal medical school research expenditures removed. The University of Texas at Austin, which ranks 24th in federal research expenditures, rises to 10th when compared to all universities with the medical school expenditures removed; the University of California at Los Angeles falls from 5th to 23rd; the University of Pennsylvania falls from 6th place to 48th; Duke from 15th to 48th; Yale from 16th to 73rd.

What’s the moral of this story? Are Duke, UCLA, Yale or Penn less distinguished because of this rearrangement of the data, this redefinition of the criteria for measurement?  Not at all, for nothing has happened to their remarkably productive enterprises. Instead, these data, with the medical school contribution to campus-based research removed, simply take a different perspective on university productivity. 

In the first instance, using all campus-based federal research expenditures, we want to know how competitive the entire campus-based academic enterprise is relative to all other campuses in competing for federal research dollars. The data points and rankings that result from that analysis are valid for that purpose, but they do not measure many other elements of university quality or competitiveness. As pointed out in the various publications on TheCenter Web site, these data tend to underestimate humanities and social science research, underestimate the research productivity of some professional schools, and say nothing about the postgraduation success of the institution’s undergraduates.

At the same time, with the medical expenditures removed, the data illustrate the importance of medical schools for research productivity in many, but not all, major research universities. In addition, these data may offer some suggestive hypotheses about institutional strategy. For example, a university like Michigan may well have invested in high-performing research in the biological sciences in both its medical and non-medical colleges and programs; when we remove the medical expenditures, its performance against the competition still ranks third.

Other institutions, perhaps the University of Pennsylvania, may have concentrated their investment in biomedical research in their medical enterprise, rather than in other departments and programs of the university. In that case, with federal medical expenditures removed, the institution’s ranking on research outside the medical school falls.

Some universities without medical schools at all may nonetheless invest substantially in biomedical and related research capability, resulting in MIT’s high ranking whether compared to all institutions or to all institutions with medical school expenditures removed. Johns Hopkins University ranks first no matter how the data are reported. There is also always the possibility that the universities report their data on federal research in fundamentally different ways, although we have no indication that this is so. For those interested in the methodology and the data, TheCenter provides complete information online.

The moral of this story is that first-rate universities have much different profiles and strategies for accomplishing their missions. Simple data displays can provide useful comparative information on very specific issues, but almost never offer useful global comparisons. Because universities are very complex, have widely varying composition in terms of students, faculty, facilities, funding and missions, and reflect the results of distinct histories and development over many generations, simple institutional comparisons on almost any dimension have only specific and limited value. 

We know that everyone loves to know who is No. 1, but in truth no university is No. 1 over all. Instead, some of us are good at some things, and others are good at others. These may overlap, they may conflict, and they may reinforce particular results that some part of our constituency seeks. When we see the pursuit of No. 1, we often suspect a device to advance a particular political or academic agenda while avoiding the harder task of improving specific results of individual, widely differing institutions.

Real accountability comes when we develop specific measures to assess the performance of comparable institutions on the same dimensions. This is hard work: it requires a clear focus, and it requires audited data collected on the same basis for every institution assessed. Programs that combine data measuring multiple types of performance from dissimilar institutions allow colleges and universities to avoid accountability because the results will be invalid. We don’t compare K-Mart and Tiffany just because both are in the business of retail sales. Neither should we compare small private sectarian colleges with Stanford or Duke, even though these are all private institutions producing undergraduate degrees.

John V. Lombardi
