In 1987, the Nobel Prize-winning American economist Robert Solow famously observed, “You can see the computer age everywhere but in the productivity statistics.”

Solow was referring to the slow productivity growth of the American economy following the technology boom of the 1970s and ’80s, but some scholars argue that this so-called IT productivity paradox also exists in higher education today. Despite rising technology spending in higher education (up from $765 per student in 2002 to $925 per student in 2013), there is still heated debate over whether these investments are justified.

Now, a recent study in The Journal of Higher Education has found that investments in technology do indeed appear to lead to increases in productivity for institutions -- but not for all institutions in the same way.

“The general logic we see in the scholarship is that investments in technology aren’t going to result in productivity gains,” said the lead author of the study, Justin Ortagus, an assistant professor of higher education administration and policy at the University of Florida. “We think that claims for this productivity paradox may be a little overstated.”

A need for more empirical data to assess the efficacy of technology investments in higher education was a key motivation for the study, said Ortagus. He explained that measuring productivity at colleges and universities is difficult because the outputs of these institutions, much like the institutions themselves, are "multifaceted" and can't easily be boiled down into one metric.

“Colleges and universities often attempt to measure productivity by examining the number of enrolled students, the number of degrees conferred or the number of credit hours awarded,” said the study. “Yet these productivity metrics only measure the teaching component of the institutional mission and fail to consider research and service outputs alongside the teaching mission.”

Taking this into account, Ortagus and his colleagues decided to measure productivity outputs in three areas they considered central to higher education: teaching, research and public service. Teaching productivity was gauged by the number of students who graduated with bachelor's degrees, research output by total research funds per student, and public service by the proportion of minority students enrolled.

The authors said this metric measures "at least in part" an institution's service to the public good, because higher education should be "the catalyst for upward mobility and social justice."

An important aspect of the methodology, said Ortagus, was that the researchers included two years of "lag time" to allow for the time it takes staff, faculty members and students to adjust to new technologies. "Not allowing for this time may explain why previous research has found new technologies don't improve productivity," said Ortagus.

"It's an interesting study," said Martin Kurzweil, director of the Educational Transformation Program at Ithaka S+R. “It’s valuable because it points to an impact, and a way of measuring that impact. Now, it’s important to understand why there is an impact. When we find out what is driving that impact, that is the information that will enable institutions to take action and hopefully improve outcomes,” he said. While a lot of institutions make smart technology investments and have seen solid productivity gains, others have made poor investments and “wasted a lot of money,” said Kurzweil.

Phil Ventimiglia, chief innovation officer at Georgia State University, agreed with Ortagus that the idea of an IT productivity paradox in higher education has been overstated. He said he believes the impact of IT spending on productivity has been “muted” because so many institutions are still “very fragmented” in their approach to IT spending. He argued that a decentralized approach could lead to inefficiencies, such as multiple departments buying duplicate software licenses. “I’d like to see a comparison of the productivity data of institutions with centralized IT organizations versus a decentralized approach,” he said.

Generally, Kurzweil said he thought the study’s approach and methodology were sound, but that there were areas where it could have been expanded. For example, due to data constraints, the study assessed only four-year colleges and universities -- institutions whose outcomes may differ from those of other types of institutions, said Kurzweil. The choice of outputs used to reflect teaching and accessibility outcomes also could have been broadened, he said; for example, student income could have been examined in addition to minority representation.

The paper is frank about its limitations, which it describes as “numerous.” But Ortagus said he hoped nonetheless that the study would “add more nuance” to the conversation about the benefits of technology, which, he said, “universities can’t really afford not to invest in.”
