
I find Matt Reed the most consistently worthwhile author to read on IHE, but even he sometimes sends in a dud. In “Pangloss, Plato and Progress,” Reed argues against those skeptical of experimentation in higher ed. The article is far less clear than his usual work, making me wonder whether he has any specifics in mind at all -- or whether he has too many specifics in mind and doesn’t want to identify who at his own institution he’s upset with.

Higher education has existed for many hundreds of years, and it has been changing that entire time. Every recent decade has seen changes in degree offerings, pedagogical approaches, and strategies for student engagement and support. Experimentation is continuous, as Reed himself has documented in earlier columns, so his tirade against “the keepers of the flame” is disingenuous. We are left to conclude that he’s upset with people opposing specific, perhaps radical, changes. But he does not make this clear.

Faculty often distrust suggestions of change from administration because such changes have so often led to bad outcomes in the past. Consider one of the most prominent recent changes to the academy: the adjunctification of teaching. This “experiment” was obviously driven by cost, and it clearly led to worse outcomes for both students and faculty. So forgive us if eliminating liberal arts majors, eliminating departments, replacing degrees with certificates, and the other bold “experiments” we’re now subjected to feel very similar -- worse outcomes for everyone, but cheaper.

Certainly the outcome of an experiment is not inevitable, but we have no trust that outcomes will be fairly assessed or that the experiment will be properly designed. Whoever designs an experiment needs to specify the metrics of success. Is lower cost alone an indicator of success, even if combined with worse student learning gains? Administrators tend to favor certain metrics (e.g., cost, matriculation, retention) while faculty favor others (e.g., student learning, student engagement, workplace quality).

Neither set is complete or obviously better than the other, but it’s clear that “success” is not a uniquely defined thing. Reed ignores that an outcome might be success according to a provost and devastating failure according to a department. He also neglects the obvious problem of controlling for other factors. If a school introduces an intensive first-year experience at the same time it cuts 20 percent of faculty and programs, it would be impossible to judge the success of the former even if we all agreed on a single success metric like first-to-second-year retention.

Reed rightly points out that most colleges in the U.S. “can’t afford experiments that don’t work,” but then somehow forgets this two paragraphs later when he introduces the false dichotomy of (major) experimentation leading to “possible success” and lack of experimentation leading to “certain failure.” There’s a less than 1 percent chance that a drug in pre-clinical study will progress to clinical trials, and about a 10 percent chance that a drug in Phase I clinical trials will win FDA approval. Imagine similar success rates in higher education for radical reform measures: the fingers of a single hand would be enough to count the institutions still standing amidst the rubble of disruption.

Business as usual looks challenging, but Reed presents not the slightest evidence that experimentation will improve those outcomes -- if indeed he has any specific experiments in mind. He has written repeatedly of the Baumol effect, which suggests that the only way to solve higher ed’s cost problem is to remove most of its people from their jobs. While this may lead to success “as a sector,” it’s unclear why Reed thinks faculty and staff should line up to try the experiment of losing their careers.

--David Syphers
