If you follow rankings at all, you’ll likely have noticed a fair bit of recent activity in the Middle East. US News & World Report and Quacquarelli Symonds (QS) both published “Best Arab Universities” rankings last year; the Times Higher Education (THE) produced a pilot MENA (Middle East and North Africa) ranking at a glitzy conference in Doha last week. (Don’t be distracted by the terminology: THE’s “Middle East” includes neither Turkey nor Israel nor Iran, so it too is effectively an Arab ranking.)

The reason for this sudden flurry of Middle East-oriented rankings is pretty clear: Gulf universities have a lot of money they’d like to spend on advertising to bolster their global status, and this is one way to do it.  Both THE and QS have already tried to tap this market by creating “emerging economies” or “BRICS” rankings, but frankly most Gulf universities didn’t do too well on those metrics, so there was a niche for something more focused.

But there is a fundamental problem with trying to rank Arab universities.  The Gulf universities and their royal patrons are under the impression that rankings are prestige tools – which of course they are.  But rankings are also transparency tools.  They cannot work unless there is accurate and comparable data which institutions are prepared to share.  Bluntly, there simply isn’t very much of this in the region. 

Let’s take some of the obvious candidates for indicators to be used in rankings:

Expenditures:  This is a classic input variable.  However, an awful lot of Gulf universities are private, and won’t want to talk about their expenditures for commercial reasons.  Some are also personal creations of local rulers who spend lavishly on them (for example, Sharjah and Khalifa Universities in the UAE); they’d be mortified if the data showed them spending less than the Sheikh next door.  Even in public universities the issue isn’t straightforward, because not all governments want their institutions to be transparent about finances (this is not a strictly Arab problem: China has similar issues in this regard); I’d guess that getting financial data out of Egyptian or Algerian universities would be a pretty unrewarding task.  Finally, for many Gulf universities, cost data can change hugely from one year to the next because of the way compensation works.  Expat teaching staff (the majority at most Gulf universities) are paid partly in cash and partly in kind through things like free housing, the cost of which swings enormously from year to year with the rental market.

Student Quality:  In Canada, the US, and Japan, rankings often focus on how smart the incoming students are, based on average entering grades, SAT scores, and the like.  But because admissions tests and grading scales aren’t comparable across countries, these measures simply don’t work in multi-national rankings of any kind.  So those are out.

Student Surveys:  In Europe, student surveys are one way of gauging quality.  However, the appetite among Arab elites for allowing public institutions to be rated publicly by customer opinion is, shall we say, limited.  So the kinds of satisfaction questions we see in the German CHE rankings, for instance, are extremely unlikely.  And we can probably count out tools like the Collegiate Learning Assessment (which was used in the AHELO exercise), or questions based on the National Survey of Student Engagement (NSSE), because those would make it possible to compare Arab universities with institutions in the US and elsewhere, and the comparison would be unlikely to be favourable to the region’s institutions.

Graduate Outcomes:  This is a tough one.  Some MENA universities do have graduate surveys, but what do you measure?  Employment?  How do you account for the fact that female labour market participation varies so much from country to country, and that in many places (the UAE and Qatar among them) female graduates are forbidden or discouraged by their families from working?

So what does that leave?  Mostly, indicators of research intensity, which have the advantage of being widely available and of not requiring universities themselves to provide data.  And, no surprise, both US News and the Times Higher have based 100% of their rankings on bibliometrics.  But that’s ludicrous, for a couple of reasons.  First, most MENA universities have essentially no interest in research.  Outside the Gulf (i.e. Oman, Kuwait, Qatar, Bahrain, the UAE, and Saudi Arabia), there’s no money available for it.  Within the Gulf, most universities are staffed by expats teaching four classes per term, with no time or mandate for research.  The only places where serious research is happening are one or two of the foreign universities that make up Education City in Doha, and some of the larger Saudi universities.  And where Saudi universities are concerned, we know that at least some of the big ones are furiously gaming publication metrics precisely in order to climb the rankings, without actually changing university cultures very much.  The practices of King Abdulaziz University have come in for particular criticism on this score.

QS has tried to use some other types of data in its ranking: it has some faculty-student ratio data, plus some data from its academic and employer reputation surveys; but given the way those surveys are conducted at a global level, I suspect there’s precious little usable data in there.  (A more focused regional set of surveys might yield some useful data, but then you’re partly back to the problem noted earlier regarding student surveys.)  QS also counts the proportion of professors with PhDs, which in emerging economies might be a useful measure of quality, and it uses “international students” as an indicator, which will have odd effects for some flagship universities in the Gulf, given that non-nationals are not allowed to enroll at institutions like the United Arab Emirates University.
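Purely to illustrate the mechanics (this is not QS’s actual methodology), here is a minimal sketch of how a composite “horse race” score is typically assembled from indicators like these: normalise each indicator across institutions, then take a weighted sum.  All institution names, indicator values, and weights below are hypothetical.

```python
# Illustrative sketch of a composite ranking score. Everything here is
# hypothetical: the institutions, the indicator values, and the weights.

# Raw indicator values per institution (all framed so higher = better).
raw = {
    "University A": {"faculty_per_100_students": 9.0, "pct_phd": 85.0, "pct_intl_students": 30.0},
    "University B": {"faculty_per_100_students": 5.5, "pct_phd": 60.0, "pct_intl_students": 5.0},
    "University C": {"faculty_per_100_students": 7.0, "pct_phd": 72.0, "pct_intl_students": 18.0},
}

# Hypothetical indicator weights; they sum to 1.0.
weights = {"faculty_per_100_students": 0.4, "pct_phd": 0.4, "pct_intl_students": 0.2}

def min_max_normalise(values):
    """Rescale raw values to a 0-100 band so different indicators are comparable."""
    lo, hi = min(values), max(values)
    return [100.0 * (v - lo) / (hi - lo) if hi > lo else 100.0 for v in values]

# Normalise each indicator across institutions, then take the weighted sum.
names = list(raw)
scores = {name: 0.0 for name in names}
for indicator, weight in weights.items():
    normed = min_max_normalise([raw[name][indicator] for name in names])
    for name, score in zip(names, normed):
        scores[name] += weight * score

# The "horse race": institutions ordered by composite score.
for rank, (name, score) in enumerate(sorted(scores.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank}. {name}: {score:.1f}")
```

The point of the sketch is that the final ordering is entirely an artifact of which indicators happen to be available and how they are weighted; with data as thin as it is in the region, the resulting order tells you very little.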

Are there any other indicators that might be useful?  The size of an institution’s graduate programs might be one: it’s easy to count, and it goes some way towards measuring the intensity of advanced study.  But that would still exclude pretty much all the foreign universities in the region.  Other than that?  Not much, I’m afraid.

In sum: it’s easy to understand why commercial rankers are chasing money in the Gulf.  But given the lack of usable metrics, it’s unlikely their efforts will provide the kind of insight that drives institutional improvement, as the best rankings can.  Instead, what we can expect is rankers using whatever data happens to be available to create a “horse race” between institutions.

