
Unreliable data, unreliable rankings

A few weeks ago, the Times Higher Education published a ranking of “top attractors of industry funds”.  It’s actually just a re-packaging of data from its major fall rankings exercise – “industry dollars per professor” is one of its thirteen indicators, and this is just that indicator published as a standalone ranking.  What’s fascinating is how at odds the results are with published data available from institutions themselves.

Take Ludwig-Maximilians University in Munich, the top university for industry income according to THE.  According to the ranking, the university collects a stonking $392,800 in industry income per academic.  But a quick look at the university’s own facts and figures page reveals a different story.  The institution says it receives €148.4 million in “outside funding”, but over 80% of that is from the EU, the German government or a German government agency.  Only €26.7 million comes from “other sources”.  This is at a university which has 1,492 professors.  I make that out to be €17,895 per prof.  Unless THE gets a much different $/€ rate than I do, that’s a long way from $392,800 per professor.  In fact, the only way the THE number makes sense is if you count the entire university budget as “external funding” (1,492 profs times $392,800 equals roughly $586 million, which is pretty close to the €579 million figure the university claims as its entire budget).
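
For anyone who wants to reproduce the arithmetic, here’s a rough back-of-envelope sketch using only the figures cited above; the euro-to-dollar rate is my own assumption, since THE doesn’t say what rate it uses.

```python
# Rough check of the LMU numbers, using only figures cited above.
# The euro-to-dollar rate is an assumption; THE does not publish the rate it uses.
EUR_TO_USD = 1.10

other_funding_eur = 26_700_000    # LMU's "other sources" (non-government) funding
professors = 1_492                # professors listed on LMU's facts and figures page
the_claim_usd = 392_800           # THE's "industry income per academic" for LMU
total_budget_eur = 579_000_000    # LMU's stated total budget

# What LMU's own figures imply per professor
per_prof_eur = other_funding_eur / professors
print(f"LMU's own figures imply €{per_prof_eur:,.0f} "
      f"(≈ ${per_prof_eur * EUR_TO_USD:,.0f}) per professor")

# What THE's per-professor figure implies in total
implied_total_usd = the_claim_usd * professors
print(f"THE's figure implies ${implied_total_usd:,.0f} in industry income in total, "
      f"which is in the same ballpark as LMU's entire budget of €{total_budget_eur:,.0f}")
```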

Or take Duke, second on the THE list.  According to the rankings, the university collects $287,100 in industry income per faculty member.  Duke’s Facts and Figures page says Duke has 3,428 academic staff.  Multiply that out and you get a shade over $984 million.  But Duke’s financial statements indicate that the total amount of “grants, contracts and similar agreements” from non-government sources is just under $540 million, which would come to about $157,000 per prof, or only 55% of what the Times says it is.

The third-place school, the Korea Advanced Institute of Science and Technology (KAIST), is difficult to examine because it seems not to publish financial statements or have a “facts & figures” page in English.  However, assuming Wikipedia’s estimate of 1,140 academic staff is correct, and we generously interpret the graph on the university’s research statistics page as telling us that 50 of the 279 billion won in total research expenditures comes from industry, then at current exchange rates that comes to a shade over $42 million, or $37,000 per academic – one-seventh of what the THE says it is.

I can’t examine the fourth-placed institution, because Johns Hopkins’ financial statements don’t break out its grant funding by public/private.  But tied for fifth place is my absolute favourite – Anadolu University in Turkey, which allegedly has $242,500 in income per professor.  This is difficult to check because Turkish universities do not publish their financial statements (in English at least).  But I can tell you right now that this figure is fantasy.  On its facts and figures page, the university claims to have 2,537 academic staff (if you think that’s a lot, keep in mind Anadolu’s claim to fame is as a distance-ed university – it has 2.7 million registered students in addition to the 30,000 or so it has on its physical campus, roughly half of whom are “active”).  For both numbers to be true, Anadolu would have to be pulling in $615 million/year in private funding, and that simply strains credulity.  Certainly, Anadolu has quite a bit of business income – a University World News article from 2008 suggests that it was pulling in $176 million per year from private sources (impressive, but less than a third of what is implied by the THE numbers) – but much of that seems to come from what we would typically call “ancillary enterprises”, that is, businesses owned by the university, rather than external investment from the private sector.

I could go through the rest of the top ten, but you get the picture.  If only a couple of hours of googling on my part can throw up questions like this, then you have to wonder how bad the rest of the data is.  In fact, the only university in the top ten where the THE number might be something close to legit is that for Wageningen University in the Netherlands.  The university lists €101.7 million in “contract research” and has 587 professors.  That comes out to a shade over €173,000, or about $195,000, per professor, which is at least within spitting distance of the $242,000 claimed by THE.  The problem is, it’s not clear from any Wageningen documentation I’ve been able to find how much of that contract research is actually private sector.  So it may be close to accurate, or it may be completely off.
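
For what it’s worth, the same per-professor arithmetic for the other schools discussed above fits in a few lines.  Again, this is only a sketch using the figures cited in this post: the exchange rates are rough assumptions, and the KAIST per-academic figure attributed to THE is inferred from the “one-seventh” gap rather than quoted from the ranking.

```python
# Per-professor sanity checks, using only figures cited in this post.
# Exchange rates are rough assumptions; KAIST's THE figure is inferred
# from the "one-seventh" comparison above, not taken from the ranking.
EUR_TO_USD = 1.10
KRW_TO_USD = 1 / 1_180

institutions = [
    # (name, THE's per-academic figure in USD, publicly derivable income in USD, academic staff)
    ("Duke",       287_100, 540_000_000,                  3_428),
    ("KAIST",      259_000, 50_000_000_000 * KRW_TO_USD,  1_140),
    ("Anadolu",    242_500, 176_000_000,                  2_537),
    ("Wageningen", 242_000, 101_700_000 * EUR_TO_USD,       587),
]

for name, the_figure, income_usd, staff in institutions:
    derived = income_usd / staff
    print(f"{name:11s} THE: ${the_figure:>9,.0f}   "
          f"derived: ${derived:>9,.0f}   ratio: {derived / the_figure:.0%}")
```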

The problem here is one common to all rankings systems that rely on institutions to submit data.  It’s not that the Times Higher is making anything up, and it’s not that institutions are (necessarily) telling fibs.  It’s that if you hand out a questionnaire to a couple of thousand institutions which, for reasons of local administrative practice, define and measure data in many different ways, and ask for data on indicators which do not have a single obvious response (think “number of professors”: do you include clinicians?  Part-time profs?  Emeritus professors?), you’re likely to get data which isn’t really comparable.  And if you don’t take the time to verify and check these things (which the THE doesn’t; it just gets the university to sign a piece of paper “verifying that all data submitted are true”), you’re going to end up printing nonsense.

Because THE publishes this data as a ratio of two indicators (industry income and academic staff) but does not publish the indicators themselves, it’s impossible for anyone to clear up where these distortions might be coming from. Are universities overstating certain types of income, or understating the number of professors?  We don’t know.  There might be innocent explanations for these things – differences of interpretation that could be corrected over time.  Maybe LMU misunderstood what was meant by “outside revenue”.  Maybe Duke excluded medical faculty when calculating its number of academics.  Maybe Anadolu excluded its distance ed teachers and included ancillary income.  Who knows?  

If the THE were transparent and the source data were published – as Simon Marginson called for last week at the meetings of the International Ranking Expert Group – it would be possible to catch and correct errors more easily.  But THE won’t do that because it wants to sell the data back to institutions; the economics of rankings are such that it pays to keep data opaque.  Similarly, THE could spend more time verifying the data, but chooses not to because it would reduce profitability.  If there’s a problem with the data, they can shrug their shoulders and blame the institutions for having provided “bad” information.

The only way this problem is ever going to be solved is if institutions themselves start making their THE submissions public, and create a fully open database of institutional characteristics.  Unfortunately, that’s unlikely to happen.  As a result, we’re likely to be stuck with fantasy numbers from rankings using institutional data for quite some time yet.

Alex Usher is the President of Higher Education Strategy Associates and Editor-in-Chief of Global Higher Education Strategy Monitor.

 

Building a comprehensive database of university performance: A Response from Phil Baty of Times Higher Education

These are exciting times. My organisation, Times Higher Education (THE), has been a leading authority on the university sector for five decades. But never in its illustrious history has it been in such a strong position.

After a major investment, and a transformation of our higher education business over the last two years, Times Higher Education is now sitting on one of the world’s most comprehensive databases of the world’s leading research-intensive universities. Earlier this month, we concluded a data collection exercise capturing 135 data points on 1,310 institutions in 93 countries (around 175,000 data points in total).

Add to this the results of our annual Academic Reputation Survey, and last year’s data too, and we are well on the way to reaching our first 1 million data points on the best universities in the world.

The database will of course fuel the globally influential Times Higher Education World University Rankings (the 2016-17 edition will be published in fall 2016), as well as a number of regional and specialist rankings. But after building a dedicated team of data editors, scientists and analysts, we are determined to do much more with the outstanding resource and expertise we now have at our disposal.

We can be pioneers, publishing unique insights using data that has never previously been made public; devising new ways to capture and recognise excellence, beyond the usual metrics around research and reputation; and developing global data definitions to ensure that institutions in the US, for example, can properly benchmark themselves against those in China.

So we were delighted that one of our exploratory innovations—to publish data on universities’ success in attracting research funding from industry—attracted worldwide attention. The data forms one of the 13 indicators used to create our flagship World University Rankings, worth 2.5 per cent of the overall scores. We chose to expose the raw data to public scrutiny, and as we had hoped, it prompted a healthy debate about universities’ “third mission” activities - and in particular how far UK universities were at risk of falling behind in this vital area.

We welcome critical scrutiny of the data itself. After all, this is new information, gathered across very different higher education systems and based on developing global data definitions. It is important that it is subjected to peer review and public scrutiny.

The advances we and others have made for a decade or more in gathering and publishing data on university performance have not come without criticism. At every step, as we have expanded from research data to cover reputational, financial and teaching metrics, we have sought to provide a platform for open conversation among university professionals.

But not all criticisms are equal in merit. The latest critique, published on The World Views blog (above), is not particularly useful. Why? Because it is based on, in the words of the author, “a couple of hours of googling” and the scraping of data from university websites and, indeed, from Wikipedia. This has led to a series of misunderstandings, incorrect assumptions and factual inaccuracies. Hyperbolic in tone, this sort of rhetoric detracts from valid and open discussion of important issues.

It would be easy to rise to the bait when someone claims that THE “doesn’t take the time to verify and check” data reported by universities. Even worse is the insinuation that named institutions may be “telling fibs” when they provide (and sign off) data to THE.

It is better to think about how professional data gathering exercises differ from the sort of desk research open to anyone with an internet connection.

THE’s annual data gathering process has improved year on year, in order to make it easier for universities to participate, while gathering more insightful data. Part of the value added by an independent third party like THE in this process is our consultation on and adoption of definitions that help universities better benchmark themselves against competitors the world over. For example, we have a clear, established definition for a full-time equivalent faculty member, making clear who to count among clinical staff, or how to differentiate between teaching and research faculty. So while our World Views critic cites Wikipedia (with that catch-all caveat “assuming Wikipedia’s estimate is correct”) as the source of his figure of 1,140 staff members at the Korea Advanced Institute of Science and Technology, in fact, the figure supplied to THE under its global data definitions, and signed off by KAIST, was 905. That’s a big difference for his denominator.

We also put a great deal of time into consulting on and applying weightings to various data sources to minimise distortions. Our goal is to give universities confidence in the picture painted by our data, whether it is published in rankings or provided through our innovative self-service tools. To this end, the THE World University Rankings are based on three key data sources: bibliometric data (taken from Elsevier’s Scopus database and collected entirely independently of any institution in the rankings); reputation data (taken from THE’s annual Academic Reputation Survey, and also collected independently of any institutions); and institutional data collected directly from many hundreds of institutions across the world.

Data collected entirely independently of any institution make up more than two-thirds of the overall score in the world rankings, but to ensure the THE rankings are properly comprehensive and balanced, and cover a wider range of institutional missions than any other global ranking system, we do depend on institutions providing us with simple institutional data. They provide basic staff and student numbers, including information on gender and nationality, as well as information on income across a range of sources.

Collecting data directly from institutions is one of the most exciting, and most challenging, aspects of the project. It provides us with a platform to innovate and develop in the field of institutional performance analysis and benchmarking, and to break new ground.

Institutional data goes through an exhaustive quality control process: a team of full-time data editors works on a case-by-case basis with each institution providing data, through named, personal contacts; there are clearly defined and established data definitions for all data points, translated into multiple languages; where public data exists, submissions are verified against the public record; our software flags up any significant variations from previous years’ data, allowing a personal intervention to check and verify submissions; and, most significantly, every institution signs off all data as being accurate.

Times Higher Education has served the global higher education community across five decades, based on partnership and trust. This means an open, consultative – and crucially constructive – dialogue with our friends and critics alike. Our growing portfolio of rankings, and our new data and analytical tools, will be developed for the next five decades based on the same principles.

Phil Baty is editor of Times Higher Education World University Rankings.
