
University rankings have proliferated at all levels: global, regional and national. Different rankings consider different combinations of measures of research excellence, specialized expertise, student admissions and options, numbers of awards, internationalization, graduate employment, industrial linkage, funding and endowment, historical reputation and other criteria. Among these criteria, research often stands out.

At the global level, the Academic Ranking of World Universities (ARWU), started by Shanghai Jiao Tong University and now maintained by the Shanghai Ranking Consultancy, has provided annual global rankings of universities since 2003, making it the earliest of its kind. ARWU is known for “relying solely on research indicators”; among other measures, it counts the number of articles published in Nature or Science and the number of Nobel Prize winners and Fields Medalists. The QS World University Rankings, produced by Quacquarelli Symonds and published since 2004, incorporate survey data from scholars and academics, citations per faculty member (obtained from Thomson, now Thomson Reuters), faculty/student ratios, and international staff and student numbers. The survey data account for 40 percent of an institution’s total score, and the citation measures account for 20 percent. Times Higher Education (THE) published the annual Times Higher Education–QS World University Rankings in association with QS from 2004 to 2009. Thereafter, THE broke with QS and joined Thomson Reuters to provide yet another set of world university rankings. The THE methodology includes 13 separate performance indicators (up from the 6 measures employed between 2004 and 2009), which are grouped under five broad overall indicators to produce the final ranking.

The ARWU, the QS World University Rankings and the THE World University Rankings were widely recognized as the three main international university rankings until US News began producing its Best Global Universities Rankings in 2014. Powered by Thomson Reuters InCites™ research analytics, the US News rankings focus on institutions’ research power and faculty resources for students. In addition, Taiwan’s Higher Education Evaluation and Accreditation Council (HEEACT) has produced the Performance Ranking of Scientific Papers for World Universities since 2007; it measures the research performance of universities and was renamed the National Taiwan University Ranking after 2012. The Russian rating agency RatER publishes a Global University Ranking that pools universities from ARWU, HEEACT, Times-QS and Webometrics (an assessment of the scholarly content, visibility and impact of universities on the web, maintained by the Spanish National Research Council) and measures their academic performance, research performance, faculty expertise, resource availability and international activities. Other similar initiatives include Australia’s High Impact Universities Research Performance Index, the Dutch Leiden Ranking, the EU’s U-Multirank, Turkey’s University Ranking by Academic Performance, Spain’s SCImago Institutions Rankings, Reuters’ World’s Top 100 Innovative Universities, and the Nature Index. This list will likely continue to grow, but with a notable lack of variation in the measures applied and a consistent emphasis on research performance.

The QS World University Rankings also publishes regional rankings for Asia and Latin America. The European Commission compiles a list of the European universities with the highest research impact. National rankings are extensively practiced in some 30 countries across Asia, Europe, North America, South America and Oceania: China, India, Japan, Pakistan, the Philippines, South Korea and Thailand; Austria, Belgium, Denmark, France, Germany, Ireland, Italy, Macedonia, the Netherlands, Poland, Romania, Russia, Sweden, Switzerland, Ukraine and the UK; Canada, Mexico and the US; Argentina, Brazil and Chile; and Australia.


The controversies around university rankings

University rankings are criticized on many fronts: for bias toward the natural sciences and English-language science journals; for emphasizing research expenditures (such as grants and contracts) as the prime measure of scientific accomplishment rather than the importance and impact of scientific discoveries or the depth of the ideas; and for failing to take into account important activities of the university, such as teaching quality, that are less easily measured. Put succinctly, university rankings are largely based on what can be measured rather than what is necessarily relevant and important to the university.

Rankings are often sensitive to relatively small changes in their weighting functions, and such small changes can alter the results from year to year without any substantial change in a university. Furthermore, university rankings encourage homogeneity among institutions. Regardless, university rankings are growing and expanding worldwide. While scholars continue to challenge their usefulness, we still live with them and are influenced by them. Indeed, rankings coincide with the rising demand for transparency and accountability in higher education.


What shall we do about university rankings?

We need to make sure that all stakeholders have a fair understanding of, an appropriate stance toward and a reasonable reaction to university rankings. For academia, apart from challenging university rankings, it is an urgent task for scholars to develop instruments and procedures that can measure the effectiveness of teaching and learning, as well as other social functions of the university, such as community engagement. The proliferation of university rankings clearly shows that, unless alternatives are in place, rankings will continue to grow and put continuing pressure on universities to mimic research-intensive peers at the expense of attention to student learning. For universities, it is essential to limit the use of rankings as a benchmark of strength and to continue developing “specialness” rather than succumb to mimicry. For governments, it is necessary to limit the use of rankings as the sole criterion for resource allocation. Preferential funding will encourage “academic drift” and reinforce the hierarchy of universities, which in turn has a profound impact on social hierarchy.

To sum up, university rankings are a response to the demand for social accountability, but their methodology and toolbox must be improved in order to capture a wider range of university functions. In the meantime, current university rankings should be used with caution, particularly when they are tied to funding formulas.


Conclusion

Though flawed, university rankings will continue to prosper, yet their outcomes must be used carefully and for limited purposes. More and more colleagues in the academic community now dislike university rankings, but let me use a metaphor to make my point. As individuals in society, we all have a credit score, whether we like it or not and whether we know it or not. Yet our credit scores are used only for very specific purposes. So should the ranking scores of universities.
