The 2012-2013 Times Higher Education (THE) World University Rankings were published a couple of months ago. There were hardly any surprises. The usual shifting in position among the top 100 universities continues, with a few universities climbing a few places, a few others slipping down and a couple sneaking in from the 101-200 band below.

The situation for African universities also remained more or less unchanged. None appeared in the top 100. The University of Cape Town in South Africa, as usual, topped the African list at rank 113, compared to 103 in 2011-2012. The three other African universities in the rankings, all in the 201-400 band, were also South African, although last year and the year before Alexandria University in Egypt made an appearance. No university from any African country other than South Africa and Egypt has ever appeared in the THE Rankings since they started in 2004.

Unfortunately, many African governments have started to urge their universities to strive to be ranked, and are pledging to provide the necessary support, without realizing the consequences. A closer look at the criteria and indicators used for the rankings shows how inappropriate that ambition would be.

The THE uses five criteria for its rankings. One of them is Teaching, carrying a weight of 30%, which is commendable since global rankings are known to be heavily research-biased. However, the devil is in the details. The dominant performance indicator for Teaching (representing 15% of the overall ranking score) is the result of a reputation survey of about 16,600 experienced scholars worldwide, asking for their perception of a university's teaching prestige. Scholars become known worldwide mainly through their research, hardly ever through their teaching, and one wonders whether it is not research that really colors their perception of a university's teaching. The other indicators are the faculty/student ratio (4.5%), which does give a crude indication of teaching quality; the number of doctorates awarded relative to bachelor's degrees and relative to faculty numbers (together 8.25%), though it is questionable whether these say anything about good teaching and learning; and finally institutional income per faculty member (2.25%), adjusted for purchasing-power parity, intended to indicate the institution's infrastructure and facilities.

The second criterion is Research, counting for another 30% of the overall ranking score. Here again, the dominant performance indicator (18%) is based on the same survey of scholars mentioned above, this time seeking their views on the university's reputation for research excellence. This means that, overall, a third of a university's total ranking score rests on the opinions of some 16,000-plus scholars, which introduces a significant degree of subjectivity. The other indicators are institutional research income per faculty member (6%) and the number of papers published in quality, peer-reviewed journals per faculty member (6%). It should be noted that any university publishing fewer than 200 such papers annually is excluded from the THE Rankings altogether, which has implications for African universities.

The third criterion is Citations, carrying on its own a weight of 30%. The indicator here is the number of times a university's publications are cited by scholars, drawn from about 12,000 academic journals and covering papers published between 2006 and 2010.

The fourth criterion is International Outlook (7.5% in total), measured by the proportion of international students (2.5%), the proportion of international faculty (2.5%), and the proportion of the university's journal publications that have at least one international co-author (2.5%).

The last criterion, carrying a weight of 2.5%, is based on the research income the university is able to attract from industry, scaled against its number of faculty members.
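For readers who want the weightings in one place, the short sketch below simply tallies the percentages quoted in this article. It is an illustration of the arithmetic only, not THE's own calculation; the indicator labels are paraphrased and the grouping follows the five criteria described above. It confirms that the figures sum to 100% and that the two reputation-survey indicators alone account for about a third of the total score.

```python
# Illustrative tally of the THE 2012-2013 weightings as quoted in this article.
# Labels are paraphrased; the grouping follows the five criteria described above.
weights = {
    "Teaching": {
        "teaching reputation survey": 15.0,
        "faculty/student ratio": 4.5,
        "doctorates awarded (vs bachelor's degrees and vs faculty)": 8.25,
        "institutional income per faculty member": 2.25,
    },
    "Research": {
        "research reputation survey": 18.0,
        "research income per faculty member": 6.0,
        "papers per faculty member": 6.0,
    },
    "Citations": {
        "citations to the university's publications": 30.0,
    },
    "International Outlook": {
        "international students": 2.5,
        "international faculty": 2.5,
        "papers with an international co-author": 2.5,
    },
    "Industry Income": {
        "industry research income per faculty member": 2.5,
    },
}

for criterion, indicators in weights.items():
    print(f"{criterion}: {sum(indicators.values()):.2f}%")

total = sum(sum(ind.values()) for ind in weights.values())
reputation_share = (weights["Teaching"]["teaching reputation survey"]
                    + weights["Research"]["research reputation survey"])

print(f"Total: {total:.2f}%")                                 # 100.00%
print(f"Reputation surveys alone: {reputation_share:.2f}%")   # 33.00% -- about a third
```

Nothing in this tally changes the argument; it simply makes visible how heavily the reputation surveys, together with citations, dominate the overall score.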

Any observer of higher education in Africa would immediately realize that African universities, with the exception of a handful, stand no chance of appearing in the THE Rankings, or for that matter in other global university rankings such as the Shanghai Jiao Tong Ranking or the QS World University Rankings, which rely just as heavily on research, publications in international refereed journals and citations. African universities have to cope with huge student enrolments on limited financial and physical resources. They are short of academic staff, a large proportion of whom do not have a PhD. Not surprisingly, their research output and their performance in postgraduate education are poor. It is clear that in the rankings race they are not playing on a level field.

But the more pertinent question is: should African universities attempt to be globally ranked? I believe not. It would be not only a waste of resources but also inappropriate. The priority for African universities at the moment should be to provide the skilled manpower required for their countries' development; to undertake research on the myriad problems facing Africa and to communicate the findings to stakeholders in the most appropriate form, not necessarily through publications in international journals; and to engage with their communities to help meet the Millennium Development Goals and the Education For All targets. These priorities do not fit the criteria used for global rankings. African universities do, however, need assistance to improve the quality of their teaching, their research output and their service to the community. Their aim, and that of their governments, should be to be quality assured, not globally ranked.
