
Universities celebrate their achievements in an endless series of public pronouncements. Like the imaginary residents of Lake Wobegon, all universities are above average, all are growing, and all are improving. In most cases, these claims of progress rest on a technically accurate foundation: Applications did increase, average SAT scores did rise, the amount of financial aid did climb, private gifts did spike upward, and faculty research funding did grow.

No sensible friend of the institution wants to spoil the party by putting these data points of achievement into any kind of comparative context. There is little glory in a reality check.

Still, the overblown claims of achievement often leave audiences wondering how all these universities can be succeeding so well and at the same time appear before their donors and legislators, not to mention their students, in a permanent state of need. This leads to skepticism and doubt, neither of which is good for the credibility of university people. It also encourages trustees and others to hold unrealistic expectations about how their institutions actually grow.

For example, while applications at a given institution may be up, and everyone cheers, the total pool of applicants for all colleges and universities may be up as well. If a college's application count rose by 10 percent between 1998 and 2002, it may nonetheless have lost ground, since the number of undergraduate students attending college nationally grew by 15 percent over the same period. Growth is surely better than decline, but only growth relative to the marketplace for students signals real achievement.
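A quick calculation makes the point concrete. The sketch below (in Python, using hypothetical application counts rather than any real college's figures) shows how a 10 percent gain against a market growing 15 percent still amounts to a loss of market share:

    # Hypothetical figures: a college whose applications grew 10 percent
    # while the national applicant pool grew 15 percent (1998 to 2002).
    college_1998, national_1998 = 10_000, 10_000_000
    college_2002 = college_1998 * 1.10    # up 10 percent
    national_2002 = national_1998 * 1.15  # up 15 percent

    share_1998 = college_1998 / national_1998
    share_2002 = college_2002 / national_2002
    print(f"Market share, 1998: {share_1998:.4%}")  # 0.1000%
    print(f"Market share, 2002: {share_2002:.4%}")  # 0.0957%
    # Despite growing 10 percent, the college's share of the market
    # shrank by about 4 percent, since 1.10 / 1.15 is roughly 0.957.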

Similar issues affect such markers as test scores. If SAT scores for the freshman class rise by eight points, the admissions office should be pleased; but if, nationally, test scores among all students went up by nine points (as they did between 1998 and 2004), the college may have lost ground relative to the marketplace.

An actual example with real data may help. Federal research expenditures provide a key indicator of competitive research performance. Universities usually report increases in this number with pride, and well they should because the competition is fierce. A quick look at the comparative numbers can give us a reality check on whether an increase actually represents an improvement relative to the marketplace. 

Research funding from federal sources is a marketplace of opportunity, defined by the amounts appropriated to the various federal agencies and the amounts those agencies make available to colleges and universities. The top academic institutions control about 90 percent of this pool and compete intensely among themselves for a share. This is the context for understanding the significance of an increase in federal research expenditures.

A review of the research performance of the top 150 institutions reporting federal research expenditures clarifies the meaning of the growth we all celebrate (TheCenter, 2004). The total pool of dollars captured by these top competitors grew by about 14 percent from 2001 to 2002. Nearly all of these institutions increased their research expenditures over this short period, but only a little over half (88 institutions) met or exceeded the growth of the pool. The rest, despite their gains, lost market share to their colleagues in the top 150.

If we take a longer-range view, from 1998 to 2002, the pool of funds spent from federal sources by these 150 institutions grew by 45 percent. To keep pace, a university's own federal research expenditures would need to grow by that same 45 percent over the period. Again, only about half of the 150 institutions (80) managed at least that growth rate. Nearly all the rest also improved over this longer period, but not by enough to stay even with the growth of opportunity.

Even comparative data expressed in percentages can lead us into confused thinking. We can imagine that matching another university's percentage growth makes us equally competitive with it. This is a charming conceit, but it misrepresents the difficulty of the competition.

At the top of the competition, Johns Hopkins University would need to capture enough additional federal grants to generate more than $123 million in new spending a year just to match the 14 percent growth of the total pool from 2001 to 2002 (it did better than that, with 16 percent growth). The No. 150 research university in 2001, the University of Central Florida, would need just over $3 million to match the same 14 percent increase. UCF did much better than that, growing by 36 percent.

Does this mean UCF is outperforming Hopkins? Of course not. In absolute terms, JHU added $142 million to its expenditures while UCF added $7.6 million.
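For readers who want to check the arithmetic, here is a short sketch. The 2001 base expenditures are not reported in the article; they are inferred from its own figures, on the assumption that $123 million and $3 million each represent 14 percent of the respective base:

    # Base-year (2001) federal research expenditures, inferred from the
    # article's figures: the dollars needed to match the pool's 14 percent
    # growth imply a base of (dollars needed) / 0.14 for each school.
    POOL_GROWTH = 0.14
    jhu_base = 123e6 / POOL_GROWTH  # roughly $879 million (inferred)
    ucf_base = 3e6 / POOL_GROWTH    # roughly $21 million (inferred)

    for name, base, growth in [("JHU", jhu_base, 0.16), ("UCF", ucf_base, 0.36)]:
        needed = base * POOL_GROWTH  # dollars required just to hold market share
        added = base * growth        # dollars actually added
        print(f"{name}: needed ${needed/1e6:.0f}M to keep pace, "
              f"added about ${added/1e6:.0f}M ({growth:.0%} growth)")
    # JHU: needed $123M to keep pace, added about $141M (16% growth)
    # UCF: needed $3M to keep pace, added about $8M (36% growth)

UCF's percentage growth is more than twice Hopkins's, yet Hopkins's absolute gain is nearly twenty times larger, which is exactly why percentages alone mislead.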

The lesson here, as my colleague at the system office of the State University of New York, Betty Capaldi, reminded me when she suggested this topic, is that we cannot understand the significance of a growth number without placing it in an appropriate comparative context and weighing its relative size.

It may be too much to ask universities to strip the public relations spin from their communications with the public, but people who manage by spin usually make the wrong choices.
