The more I’ve reflected on President Obama’s plans, as described last week, the less I like them. Rating community colleges assumes that students choose among many; in practice, community colleges are defined by geography, and few students have more than two realistic options. Rather than pitting them against each other, it would make more sense to lift all boats.
Having said that, though, I’m quite taken with the chart in this piece from Brookings. Beth Akers and Matthew Chingos did a basic regression analysis using the fifteen largest public universities in America. The chart shows the amount by which each university either overperformed or underperformed its demographics, using six-year graduation rates as the base measure.
It’s an admittedly partial picture, but it gets at the “sabermetrics” I invoked last week. Given the socioeconomic profile of your students, what should your grad rate be? By that measure, the University of Michigan - Ann Arbor and the University of Central Florida have some work to do, but Michigan State and Rutgers are punching above their weight.
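For readers who want to see the mechanics, here’s a minimal sketch of the idea: regress observed six-year graduation rates on a few socioeconomic predictors, then treat each institution’s residual (the gap between its actual rate and the rate its student profile would predict) as the over- or underperformance score. The institutions, predictors, and numbers below are invented for illustration; they are not the Akers and Chingos data.

```python
# Sketch of the "expected vs. actual graduation rate" idea.
# All names and figures below are hypothetical, for illustration only.
import numpy as np

# One row per institution: [pct_pell, pct_first_gen, avg_family_income_k]
demographics = np.array([
    [0.18, 0.12, 95.0],
    [0.35, 0.30, 62.0],
    [0.42, 0.38, 55.0],
    [0.25, 0.20, 80.0],
    [0.30, 0.25, 70.0],
    [0.48, 0.40, 50.0],
    [0.22, 0.18, 88.0],
    [0.38, 0.33, 60.0],
])
grad_rate = np.array([0.91, 0.78, 0.70, 0.74, 0.82, 0.58, 0.86, 0.66])

# Ordinary least squares: grad_rate ~ intercept + demographics
X = np.column_stack([np.ones(len(grad_rate)), demographics])
coef, *_ = np.linalg.lstsq(X, grad_rate, rcond=None)

predicted = X @ coef              # what demographics alone would predict
residual = grad_rate - predicted  # positive = punching above weight, negative = work to do

for name, r in zip("ABCDEFGH", residual):
    print(f"Institution {name}: {r:+.3f}")
```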
Presumably, similar analyses could be run for institutions in different sectors. Which community colleges are doing better than their demographics would lead us to expect? Which four-year public colleges? For that matter -- and this would be very interesting -- which for-profits?
The real issue is what to do with the information once we have it. Certainly I’d want to see multiple measures, since any single number is subject to all sorts of distortion. For example, a graduation rate by itself could reflect excellent teaching or grade inflation or an unusual program mix or an exogenous shock. Ideally we’d have some sort of measure of actual learning. In the absence of that, though, it would help to have a more nuanced blend of metrics that would lessen the impact of any given anomaly.
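One way to build that blend, sketched below with invented metrics and numbers: standardize each measure across the peer group so different scales are comparable, then average the standardized scores so that no single anomalous figure dominates the composite.

```python
# Sketch of a blended metric; the metrics and values are made up for illustration.
import numpy as np

# Rows = institutions; columns = hypothetical measures
# [six-year grad rate, transfer-out success rate, completions per 100 FTE]
metrics = np.array([
    [0.70, 0.45, 22.0],
    [0.62, 0.55, 19.0],
    [0.75, 0.40, 25.0],
    [0.58, 0.60, 18.0],
])

# z-score each column so measures on different scales carry comparable weight
z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)

# equal-weight composite; the weights could instead reflect policy priorities
composite = z.mean(axis=1)
print(composite)
```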
Then, once we have that, I’d love to see the Feds pony up some money for serious comparative studies. What is, say, Rutgers doing that the U of Michigan isn’t? In my perfect world, the point of that kind of study would be to extract useful lessons. What are the consistently high-performing colleges in each sector doing that their peers could learn from? (Admittedly, the for-profits might not want to participate in that, since they compete with each other. But it’s worth asking.) What are the most impressive community colleges doing that other community colleges could adapt?
That kind of study rarely happens now. The Community College Research Center does heroic and wonderful work, but it’s one place. Papers at the League for Innovation or the AACC tend to be autobiographical success stories; it’s rare to see or hear systematic examination of underperformance. Titles III and V fund some wonderful projects, and some cross-conversation occurs among them, but the comparisons are not systematic, and nobody particularly wants to admit struggling. Gates and Lumina don’t fund comparative work, as far as I’ve seen.
The beauty of an approach like this is twofold. It’s cheap, and it’s egalitarian. It would use documented differences in performance to lift all boats, rather than to decide more efficiently whom to starve. In that sense, it’s much truer to the mission of public higher education than a sort of Hobbesian war of each against all. Deploying a squadron of sociologists to improve public higher education in America strikes me as public money well spent. Far better to do that than to set colleges at each other’s throats, gaming statistics to make next year’s payroll.