So there’s some excitement being generated this month with respect to the OECD’s Assessment of Higher Education Learning Outcomes (AHELO). Roughly speaking, AHELO is the higher education equivalent of the Programme for International Student Assessment (PISA), or the Programme for the International Assessment of Adult Competencies (PIAAC). It consists of a general test of critical-thinking skills (based on the Collegiate Learning Assessment), plus a couple of subject-matter tests that assess competencies in specific disciplines. AHELO completed its pilot phase a couple of years ago, and the OECD is now looking to move it to a full-blown regular survey.

Not everyone is keen on this. In fact, the OECD appears to be moving ahead with this despite extremely tepid support among OECD education ministers, which is somewhat unusual. Critics make a number of points against AHELO, which mostly boil down to: a) it’s too expensive, and it’s taking resources away from other, more important OECD efforts in higher education; b) low-stakes testing is generally of dubious value; and c) trying to measure student outcomes internationally is intrinsically invalid because curricula vary so much from place to place.

The critics have half a point. It’s quite true that AHELO is expensive and is crowding out other OECD activities, and it’s not entirely clear why the OECD is burning this much political capital on a project with so little ministerial support. While there is real benefit to outcomes measurement, there are also benefits to other kinds of work the OECD could do. It’s not just the costs – it’s the opportunity costs as well.

The criticism with respect to low-stakes testing (basically, that students won’t try very hard on tests that don’t count towards a grade, so scores on such tests are not a valid measure of competence) has some force to it. On the other hand, if the purpose of the tests is to compare students in place X with those in place Y, that critique only holds if you think the low-stakes nature of the testing affects some students more than others. Otherwise it’s an equal handicap for all students and thus shouldn’t affect the comparisons. After all, PISA and PIAAC have both become huge successes, helping to inform policy around the world, despite the fact that they are equally “low-stakes”. And as for the different curricula: that’s the point. Part of what governments want to know is whether or not what is being taught in universities brings students up to international standards of competency. One could of course quibble with the notion of international standards of competency in some fields, but that’s a different issue.

But what’s most notable about the charge against AHELO is who is making it. In the main, it’s associations of universities in rich countries, such as the American Council on Education, Universities Canada, and their counterparts in the UK and Europe. And make no mistake, they are not opposing it because they think there are better ways to compare outcomes. None of the opponents have come forward and said “we have some alternative ideas about how to do outcomes assessment”. Quite simply, these folks do not want comparisons to be made.

This wouldn’t be such a terrible position to take if it stemmed from a dislike of comparisons based on untested or iffy social science. But of course, that’s not the case. Top universities are more than happy to play ball with rankings organizations like Times Higher Education, where the validity of the social science is substantially more questionable than AHELO’s.

Institutional opposition to AHELO, for the most part, plays out the same way as opposition to U-Multirank (which was boycotted by top-ranked schools belonging to the League of European Research Universities (LERU) on the patently absurd grounds that “it might be turned into a ranking”). It’s a defence of privilege: top universities know they will do well on the comparisons of prestige and research intensity that are the bread and butter of the major rankings. They don’t know how they will do on comparisons of teaching and learning. And so they oppose such comparisons, and don’t even bother to suggest ways to improve them.

Is AHELO perfect?  Of course not.  But it’s better than nothing – certainly, it would be an improvement over the mainly input-based rankings that universities participate in now – and can be improved over time.  The opposition of top universities (and their associations) to AHELO is shameful, hypocritical, and self-serving.  They think they can get away with this obstructionism because the politicking is all done behind the scenes – but they deserve to be held to account.

 
