
A former adviser to the University of Texas Board of Regents who is aligned with controversial reforms that have been touted by conservative groups and Governor Rick Perry issued a report Tuesday identifying what he called a “faculty productivity gap” at the two chief research institutions in the state.

Using data released by Texas A&M University and the University of Texas, Rick O’Donnell, who had a brief and tumultuous stint as special adviser to the UT regents, broke faculty into five groups -- Dodgers, Coasters, Sherpas, Pioneers and Stars -- based on the number of students they taught (weighted against their costs in salary, benefits and overhead) and the amount of external research funding they brought to their universities. Full-time administrators who teach a course in addition to their regular duties were excluded.

“The data shows in high relief what anecdotally many have long suspected, that the research university’s employment practices look remarkably like a Himalayan trek, where indigenous Sherpas carry the heavy loads so Western tourists can simply enjoy the view,” O’Donnell wrote in his paper, “Higher Education’s Faculty Productivity Gap: The Cost to Students, Parents & Taxpayers.”

The analysis, which critics derided as biased and based on flawed data and assumptions, is the latest salvo in an ongoing battle in Texas over the role of research universities. The two systems released hundreds of pages of raw data -- which several faculty members said was inaccurate and distorting -- on the teaching loads and the dollar amounts in external grant funding snared by faculty in 2010. Those reports came with the caveat from the systems that the data had not been fully vetted and “cannot yield accurate analysis, interpretations or conclusions.” More recently, UT-Austin critiqued a set of “Seven Breakthrough Solutions” that embraced market-oriented reforms for higher education and were put forth by a conservative think tank, the Texas Public Policy Foundation, which has ties to Governor Perry and where O’Donnell was formerly a senior fellow.

The new report is sure to attract attention, given O'Donnell's political connections and the shots he is taking at prominent universities. But it is almost sure to infuriate faculty members in the humanities and social sciences, many of whom are categorized essentially as lazy because they receive no credit in the study's methodology for any research that doesn't receive external funds.

O’Donnell said his analysis lent credence to decades of anecdotal evidence about low faculty productivity, and he hoped it would spur discussion on campuses and among legislators about why different faculty members have widely disparate workloads and productivity. “The heart of the productivity issue is labor costs,” O’Donnell told Inside Higher Ed, and cited a litany of challenges facing higher education: the high costs of college, disruptive technological innovation and questions about quality. “I hope it spurs people to dig deep. How do we solve these problems and modernize the university?”

Critics, however, did not see O’Donnell’s analysis as an attempt to shine a light on serious issues as much as an exercise in self-vindication by a “disgruntled ex-employee,” said Pamela Willeford (specifying that she was speaking for herself and not on behalf of others). Willeford is a former chair of the Texas Higher Education Coordinating Board and a member of the operating committee of the Texas Coalition for Excellence in Higher Education, which was organized to counter the ideas put forth by those aligned with Perry.

Stressing that she believes higher education can be improved, Willeford argued that the universities' presidents and system chancellors were already working, through such efforts as the blue-ribbon Commission of 125, to better the institutions in ways that will benefit students and the state.

“We think these are very simplistic ideas that are being pushed in a heavy-handed way with an obvious bias of someone who no longer works for the system,” Willeford said of O'Donnell's ideas. “Name-calling like what’s going on in this report ... is certainly not helpful.”

In O'Donnell's nomenclature, “Dodgers” are the least productive faculty because they bring in no external research funding and teach few students. “In essence, they’ve figured out how to dodge any but the most minimal of responsibilities,” O’Donnell wrote.

“Coasters” are senior and tenured faculty who have reduced teaching loads and do not produce significant research funding.

“Sherpas,” on the other hand, are mostly untenured faculty who bear most of the teaching load while carrying out little to no research.

The last two categories describe faculty who generate considerable external research money. “Pioneers” are highly productive in research -- especially in science, technology and engineering -- but teach little. “Stars” are highly productive faculty who do considerable teaching and funded research.
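For readers curious about the mechanics, the sketch below shows how a classification along these lines could be implemented. It is only an illustration: the report's actual cutoffs, weighting formula and tiebreakers are not spelled out in this article, so the two thresholds and the tenure-based tiebreaker here are assumptions, not O'Donnell's numbers.

```python
# Illustrative sketch of the five-category scheme described above.
# TEACHING_CUTOFF, FUNDING_CUTOFF and the tenure tiebreaker are
# hypothetical values chosen for demonstration; the report's real
# thresholds are not given in the article.

from dataclasses import dataclass

TEACHING_CUTOFF = 200      # assumed: cost-weighted students taught per year
FUNDING_CUTOFF = 100_000   # assumed: external research dollars per year

@dataclass
class Faculty:
    weighted_students: float  # students taught, weighted against salary, benefits and overhead
    external_funding: float   # external research money brought in
    tenured: bool

def classify(f: Faculty) -> str:
    high_teaching = f.weighted_students >= TEACHING_CUTOFF
    high_research = f.external_funding >= FUNDING_CUTOFF
    if high_teaching and high_research:
        return "Star"       # considerable teaching and funded research
    if high_research:
        return "Pioneer"    # strong research funding, little teaching
    if high_teaching:
        return "Sherpa"     # heavy teaching load, little to no funded research
    # Low on both axes: the article distinguishes senior/tenured
    # "Coasters" from "Dodgers", so tenure serves as the tiebreaker here.
    return "Coaster" if f.tenured else "Dodger"

# Example: a heavy teacher with modest funding lands in the Sherpa bucket.
print(classify(Faculty(weighted_students=250, external_funding=20_000, tenured=False)))
```

As the critics quoted below point out, any such scheme stands or falls on its inputs: the cutoffs are arbitrary, and research counts only if it is externally funded.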

Categorizing people in this way, said O’Donnell, was a way of highlighting those whom he said were coasting and not teaching enough, which he argued results in higher college costs and lower educational quality. “I think there’s a lot of evidence that there’s too much research that might be mediocre and too many faculty members who might not be engaging with students,” he said.

But Gregory L. Fenves, dean of the Cockrell School of Engineering at UT-Austin, said that such conclusions were not supported by the facts because the analysis was deeply flawed. Fenves had two chief criticisms. The first was that the analysis didn’t disaggregate tenure-track and tenured faculty from instructors and contingent faculty, who serve very different functions within the university. The second was that research productivity was measured only in terms of external grants. “I don’t think we can have a valid analysis for discussion if it’s based on those two premises,” said Fenves.

O’Donnell said he chose that metric because it was the only one available in the data and could be evaluated on an apples-to-apples basis. He added that peer review was embedded in the process, which testified to the worth of the scholarship being funded. He also said he supported the role of research in the humanities and social sciences -- even though it would not be accounted for in his measurement.

Fenves said the biggest problem with using this information in this way was that it confused an input into the system (money raised externally) with an output (scholarly impact). Impact, he said, is typically measured in a researcher’s contributions to journals and delivery of papers at conferences. “I’m not opposed to analysis,” said Fenves. “It’s got to be the right analysis.”
