Upon signing up for Opinion Outpost, a website on which users take surveys for points that can be redeemed for cash, an untenured philosophy professor took surveys about toilet paper brands, frozen foods and other sundries. Completing the surveys, at $1 to $5 a pop, was a good way to make some extra pocket money, explained the professor, who preferred not to be named. Most of the surveys the professor completed through Opinion Outpost did not seem to be particularly high-stakes, but one, in retrospect, was: the QS Global Academic Survey, which counts for 40 percent of the QS World University Rankings, one of the three major international university ranking systems.

That a major university ranker with global influence used a paid survey site to collect responses has raised some eyebrows: Brian Leiter, a professor and director of the Center for Law, Philosophy, and Human Values at the University of Chicago, who first posted the professor’s account on his blog, went so far as to call the QS rankings “a fraud on the public.” Ben Sowter, head of the intelligence unit for the London-based company Quacquarelli Symonds, which creates the rankings, acknowledged that QS contracted with a Utah-based company, Qualtrics, on a trial basis to obtain 400 responses from U.S. academics through a variety of channels, one of which was Opinion Outpost. Sowter said that QS has not used an approach in which survey respondents receive a cash reward since that 2012 trial, but he defended the general practice of offering modest incentives. “The notion of providing incentives for surveys is not a new one,” he said, adding that this year the company is providing respondents with free downloads of QS reports and that in the past it has entered respondents into drawings for prizes (e.g., an iPad).

“From our perspective, we examine the completeness and the qualifications of the respondents themselves rather than the specific channels that we use, and this has been a trial that resulted in 400 responses out of 46,000 that we use to inform our work,” Sowter said. 

All three of the major global university rankings -- QS’s as well as Times Higher Education’s World University Rankings and the Shanghai Ranking Consultancy’s Academic Ranking of World Universities (ARWU) -- are regularly criticized. Many educators question the value of rankings and argue that they can measure only a narrow slice of what quality higher education is about. QS’s methodology seems to be particularly controversial, however, due in large part to its heavier reliance on reputational surveys relative to the other rankers. Together with a survey of employers, which counts for 10 percent of the overall ranking, reputational indicators account for half of a university’s QS ranking. By comparison, a university's teaching and research reputation, as gauged by an invitation-only survey of academics, accounts for a third of the Times Higher Education ranking; the ARWU doesn't use reputational surveys at all, relying instead on objective metrics related to citations and publications and the numbers of alumni and faculty winning Nobel Prizes and Fields Medals. Reputational data are "cheap and easy to collect, especially if one is not worrying too much about how respondents are selected,” said Philip G. Altbach, director of Boston College’s Center for International Higher Education and a member of advisory boards for QS’s two main competitors in the global rankings space (and a blogger for Inside Higher Ed).

QS uses a variety of mechanisms to identify academics who will fill out its reputation survey, including buying directories listing contact details for academics and inviting universities to submit names. Respondents, who are asked to identify up to 10 domestic and 30 international institutions that are producing the best research in their field, cannot list their own universities, but the potential to recruit respondents who view a particular institution favorably came to light recently after the president of University College Cork urged professors to each ask three colleagues from outside the university to register for the survey and -- in effect -- vote for Cork. QS has since changed its policies to prohibit this type of gamesmanship and issued a statement listing 10 reasons why its rankings “cannot be effectively manipulated.” (Among them: “sign-up screening processes,” “sophisticated anomaly detection algorithms,” and “market-leading sample size.” QS describes its survey of academics as the largest of its kind and publishes details about its respondents on its website.)

Among QS’s most prominent critics is Simon Marginson, a professor of higher education in the Centre for the Study of Higher Education at the University of Melbourne. Marginson divides rankings into three main categories: those that rely wholly on bibliometric or other objective research metrics (the ARWU falls into that category); multi-indicator ranking systems like those produced by Times Higher Education and U.S. News & World Report, which assign weights to various objective and subjective indicators, including reputation surveys, faculty-student ratio and, in the case of the latter ranking, selectivity in admissions; and a category which he says “is uniquely occupied by QS. QS simply doesn’t do as good a job as the other rankers that are using multiple indicators.” In addition to the reputational measures, QS bases 20 percent of its ranking on faculty-student ratio, 20 percent on citations per faculty, and 5 percent each on the proportions of international students and international faculty.
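To make the arithmetic behind those weights concrete, here is a minimal sketch, in Python, of the weighted-sum model described above. It assumes each indicator has already been normalized to a 0-100 scale; the indicator names, the normalization and the example scores are illustrative assumptions, not QS’s actual procedure.

```python
# Minimal sketch of QS's published weighting scheme as a weighted sum.
# Assumption: each indicator is already normalized to a 0-100 scale;
# QS's actual normalization procedure is not described in this article.

QS_WEIGHTS = {
    "academic_reputation": 0.40,    # survey of academics
    "employer_reputation": 0.10,    # survey of employers
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_students": 0.05,
    "international_faculty": 0.05,
}

def qs_overall_score(indicators: dict) -> float:
    """Weighted sum of normalized (0-100) indicator scores."""
    return sum(weight * indicators[name] for name, weight in QS_WEIGHTS.items())

# Hypothetical university: strong reputation, middling objective indicators.
example = {
    "academic_reputation": 90.0,
    "employer_reputation": 85.0,
    "faculty_student_ratio": 60.0,
    "citations_per_faculty": 70.0,
    "international_students": 50.0,
    "international_faculty": 55.0,
}
print(round(qs_overall_score(example), 2))  # 75.75
```

On these assumptions, the two reputation surveys alone contribute 44.5 of the 75.75 points -- the leverage that the critics quoted in this article object to.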

[Table: Methodology for Major Global Ranking Systems]

“There are a lot more dark spaces and problems with the way they go about it,” said Marginson, who sits on Times Higher Education’s editorial board and the international advisory board for the ARWU, and is an author of the U21 Ranking of National Higher Education Systems. In particular, Marginson cited the fact that half the ranking is based on reputational surveys, as well as those surveys’ low response rates. (According to Sowter, response rates to the academic reputation survey vary from 2 to 8 percent across the various sources through which respondents are identified, excluding repeat respondents, who tend to reply at higher rates.) “Essentially what they have done is they’ve got a ranking process which they’ve done very cheaply because they don’t do it very well, and that’s a loss leader for a lot of other business activities,” Marginson said. “It’s a very successful business model, but I do think social science-wise it’s so weak that you can’t take the results seriously.”

QS, which has offices in five countries, provides a host of services to colleges, including consulting and recruitment fairs. Critics have also raised questions about the potential for conflict of interest when the same company both calculates the rankings and offers consulting services to individual universities, but Sowter said there are strict internal walls preventing one part of the business from bleeding into another.

“We’re mindful of the perception [of conflict of interest], and we’ve established ourselves in such a way to allay any fears,” said Sowter, who pointed out that there are also potential conflicts of interest inherent in a newspaper that ranks colleges and turns around and sells the same colleges advertising. “Ultimately we are a business and we don’t apologize for that, but we’re a business with a mission statement. We have a mission statement to enable motivated people around the world to achieve their potential through educational achievement, career development, and international mobility, and that’s what we’re trying to do here.”

The most controversial of the QS products is the QS Stars rating system, in which universities pay a fee to be audited and awarded up to five stars based on 51 criteria in eight subcategories. The audit fee is $9,850, and universities also pay an annual licensing fee of $6,850 for each of the three years the license is valid, allowing them to use QS's graphics and logos in their promotional materials. This brings the total cost to a participating university to $30,400 for three years. The star ratings are listed right alongside the university rankings on QS’s website -- a fact that led the authors of a recent European University Association report on rankings to raise the concern that “While universities are free to decide whether or not to take part in a QS Stars audit, when a good university does not appear to have stars questions arise and university leaders are under pressure to take part in the QS Stars exercise.” Sowter said this was not the intention; rather, the stars are meant to provide information to prospective students, who can click on them to find out more. He added that a website redesign in the works will feature clearer labeling.
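For readers doing the math, that total is simply the one-time audit fee plus three years of licensing: $9,850 + (3 × $6,850) = $9,850 + $20,550 = $30,400.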

QS urges its clients to promote their star ratings to prospective students, and indeed universities have not been shy about putting out press releases boasting of their star ratings or displaying them on their websites alongside the results of rankings that they didn't have to pay to participate in. In addition to about 10 or 15 top universities that QS rated for free to establish benchmarks, Sowter said that about 140 universities, in 32 countries, have paid for the service. Forty-nine of the participating universities are in Indonesia, a country whose universities do not crack the top 250 of QS's overall world rankings.

The IREG Observatory on Academic Ranking and Excellence, an association made up of ranking organizations and universities, gave its seal of approval last week to QS’s three main university ranking systems -- the global rankings as well as its rankings of Latin American and Asian universities -- with the caveat that it be clear that it is not endorsing the Stars system, which it has not evaluated. QS was one of the first two rankers to be "IREG-approved" after undergoing the group’s audit process, which evaluates rankers across 20 criteria; in order to pass the audit, a ranker has to earn at least 60 percent of the maximum score of 180 and be rated no lower than a three, or “adequate,” on each of the criteria. Marginson called the IREG audit “a low bar,” noting via e-mail that the principles on which the audit is based “are too loose, and assume public good intentions (e.g., a non-commercial commitment to accuracy in the interests of the public) that not all rankers necessarily share.” Altbach said that the audit is well-intentioned but “a little bit like the fox guarding the chickens. It's the rankers doing the accreditation of the rankers."
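In concrete terms, the pass mark works out to 0.6 × 180 = 108 points, and a rating below three (“adequate”) on any single criterion fails the audit regardless of the overall total.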

IREG's managing director, Kazimierz Bilanow, said great effort is made to ensure the independence of the audit. A professor who is not affiliated with IREG coordinates the process, and the audit committee is composed of independent evaluators; in QS’s case, the committee was led by Tom Parker of the Institute for Higher Education Policy, in Washington, D.C., and included representatives from universities in Japan, Poland and Saudi Arabia. IREG’s executive committee, which includes representatives of rankings organizations, ultimately votes on whether to accept the audit committee's determination. Bilanow said that one member of the executive committee, Robert Morse, who directs the rankings for U.S. News, QS’s partner organization in the U.S., recused himself from the discussions and decision-making in the case of the QS audit. IREG does not plan to publish the full audit report, but expects to release an executive summary this week. It will not publish individual rankers’ audit scores, so as not to facilitate “a ranking of rankings.”

“I know that QS has come under criticism more than others, but one should also say in their favor the fact that they are going a little further than others in evaluating and ranking universities and schools in other places, say in Asia and Latin America,” Bilanow said. “They’re putting a competitive spirit into the academic community.”

“They've taken their basic model and they’ve expanded it, and they’ve sliced and diced their data; mind you, Times [Higher Education] has done a bit of this as well,” said Ellen Hazelkorn, vice president of research and enterprise and dean of the Graduate Research School at Dublin Institute of Technology and author of Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence (Palgrave Macmillan). “What they’ve done is they’ve seen there’s a market, and as anyone with business acumen would, they have repackaged [their product] for different audiences.”

Hazelkorn said that she wouldn’t necessarily single QS out for criticism. Rather, she sees problems with all of the major rankers’ methodologies, and the big problem in her mind lies in their outsized influence: increasingly, the major global rankings are being used to make policy decisions about how to allocate resources to universities or which institutions students may attend on government scholarship programs.

 "At the end of the day you can say they’re a commercial company; they’re a business. You want to eat McDonalds all day, we’re not telling you it’s the healthiest food but it’s your choice. But the problem is we have policy makers and others making serious decisions about higher education, about resource allocation and related issues based on rankings," she said.
