It was an unusual press release, to say the least, framed more as a call to arms than a communiqué about a new personnel policy.
“Ghent University is deliberately choosing to step out of the rat race between individuals, departments and universities. We no longer wish to participate in the ranking of people,” said the press release announcing Ghent’s new policy for evaluating faculty performance, including for tenure.
“It is a common complaint among academic staff that the mountain of paperwork, the cumbersome procedures and the administrative burden have grown to proportions that are barely controllable,” the release from Ghent, which is located in the Flemish region of Belgium, continued. “Furthermore, the academic staff is increasingly put under pressure to count publications, citations and doctorates, on the basis of which funds are being allocated. The intense competition for funding often prevails over any possible collaboration across the boundaries of research groups, faculties and -- why not -- universities. With a new evaluation policy, Ghent University wants to address these concerns and at the same time breathe new life into its career guidance policy. Thus, the university can again become a place where talent feels valued and nurtured.”
The release clearly struck a chord with many international academics and was shared widely on social media. Inside Higher Ed spoke with Ghent’s rector, Rik Van de Walle, about what is changing and why.
Van de Walle said the university is moving from a primarily quantitative system for evaluating faculty performance to a more holistic model. The university has done away with annual task reports in which faculty had to report on plans for the coming year and what they did the year before. And it has moved from conducting evaluations of faculty every two to four years, depending on their rank, to every five years, to create an “evaluation break.”
At the beginning of the five-year period, faculty will have to explain what their goals are, Van de Walle explained, “but we don’t tell them which type of ambition they should go forth with. It’s up to the professors themselves to let us know what they want to do not for the next year but for the next five years. That’s the first major change.”
The second major change, Van de Walle said, is an increased emphasis on coaching. “Every professor gets five people around him or her, we call this an HR committee, but it’s not an administrative committee, it’s five people” -- including the professor’s department head, a senior professor from a related field and a human resources administrator -- “who are coaching, who are guiding the professor through their careers.”
“The third thing we changed -- in the past our evaluations were based very, very strongly on output metrics, while now the evaluation will be based on a feedback report coming from professors. So, the professors will have to write down at the end of the five-year period what they are proud of, what they believe they realized during the last five years, and we will not force them to report the number of publications or the number of Ph.D.s [they supervise] or so on. Just like they had at the beginning, once again they will have the freedom to explain, to tell us what they believe are the major contributions they came up with during the last five years, so it’s really what you could call professor-driven, so to speak. It’s the ambition of the professor that is put on paper at the beginning of the five-year period and it’s the view of the professor at the end of that period on what happened in the last five years that will drive the evaluation at the end.”
The same committee that did the coaching will be doing the evaluating, sending its assessment of faculty performance to the Faculty Board. If the Faculty Board’s assessment is positive, the evaluation process ends there; if it is negative, it goes to university management for a final decision. For tenure-track faculty, the tenure decision will be made at the end of the first five-year evaluation period. For professors who already have tenure, a negative evaluation at a five-year mark will trigger a second evaluation two years later; after two successive negative evaluations, termination is possible (though not automatic).
“I really think it will change the culture in a really drastic way,” Van de Walle said of the new policy. “I think the pressure that people feel, the pressure towards slicing their publications, trying to get five publications out of a bunch of results instead of one major publication -- this is something we really see in practice -- this will disappear, at least partially because it’s not in their interest anymore.”
Before, he said, professors who went up for promotion were expected to deliver minimum numbers of publications, minimum numbers of Ph.D. students they supervised and minimum numbers of research proposals accepted. “And more was always better. Now we got rid of that.”
This approach of de-emphasizing quantitative metrics is not wholly without risk, both in terms of the impact a drop in some of these metrics could have on Ghent’s standing in international rankings -- in three major world rankings, Ghent is variously ranked the first-, second- and third-best university in Belgium -- and, perhaps even more crucially, on its ability to compete with other institutions for funding within the Flemish university system. Van de Walle said that performance on metrics like the number of Ph.D.s awarded, publication output and project funding determines a substantial part of the funding Ghent gets from the government. It is what Van de Walle described as a "closed-envelope system," in which the five Flemish universities compete with one another on these metrics to maximize their share of a fixed amount of funding.
Faculty are waiting to see how the system works in practice, but some who spoke with Inside Higher Ed were positive about the changes.
Ben Derudder, a professor of urban geography at Ghent, said it’s important to understand that in Flanders, university rectors are elected by staff, students and faculty, and that this change (“or better,” he said, “the spirit of this change”) was part of the current leadership team’s campaign platform. “Because tenured [academic] staff have the largest proportion of votes in the election process, it is therefore safe to say the change you are discussing is broadly endorsed,” he said via email.
“I support the change, even though I believe the tangible short-term impacts will be minimal,” Derudder said. “The change has above all an important signal function: it shows an understanding/concern that the output-maximizing focus does not lead to better science and creates all sorts of problems, ranging from Matthew effects in funding allocation" -- Matthew effects are sometimes summarized as the rich get richer while the rest get poorer -- "to well-being in the workplace. I think changes will be minimal because the broader environment still breathes the spirit of more, more, more: major E.U. and Flemish funding agencies still operate in the broader spirit of metrics, so all of us still need to consider this as we navigate how we will be evaluated. Another reason why change will be slow is that my generation has been socialized in this spirit. There may be short-term advantages such as less paperwork (putatively needed to be ‘measured’ all the time), but for me the major pluses are intangible: a spirit of trust, an evaluation system that is part of a broader coaching strategy, an assessment system that is supportive rather than punitive, and perhaps above all its broader signal function.”
“I think it’s a more humane approach,” said Koen Vlassenroot, a professor in political and social sciences in Ghent’s Department for Conflict and Development Studies. “Is it going to really entirely transform academia? I don’t think so, but this is a very, very necessary step to de-quantify academic performance and to turn universities into what they should be -- a healthy environment [to] come up with good ideas and good theories. That’s what we are losing a little bit.”
Jeroen Huisman, a professor of higher education at a research center at Ghent focused on higher education governance, likewise said he “would see it as a change for the better.”
“There are many signals -- also from research -- that contemporary management practices may have swung too much to the extreme of performance indicators, quantitative targets and the wrong types of accountability (including much unnecessary bureaucracy),” Huisman said in an interview over email. “There is very limited evidence that these management practices have been effective. On the contrary, there are clear signs that these regimes affect staff well-being and health negatively (read: stress, anxiety, burnouts). The focus on trust, a shift towards a more qualitative assessment and ex post evaluation all are sound elements of [human resources management] practices that go beyond simply counting on the basis of ill-founded performance indicators. The approach still contains elements of accountability, but it is a form of evaluation that is much more in line with key values of professional accountability, a belief that academics are in principle intrinsically motivated to ‘perform’ well and the nature of academic work (ridden with uncertainties about success of grant applications, success in getting something published in a particular journal, but also uncertainties about whether students will e.g. appreciate a curricular or pedagogical innovation by an enthusiastic teacher). The portfolio approach (staff may perform well in certain areas, but not necessarily in all) offers staff much more leeway to reconstruct their past activities and achievements.”
Huisman described Ghent’s move as “unusual for sure. On the one hand, it may not be a revolutionary change in that there are other universities that have also moved towards a management approach that better fits the values of the academic profession (see e.g. the developments in various countries to create a 'quality culture' related to teaching and learning). Let us also not forget that one cannot totally do without performance indicators. As long as there is competition, humans will look for tools and indicators to decide on who is doing well and not so well. Who would want to watch a professional soccer match if there are no rules that tell us who the winner is? In that sense, the change at Ghent University may not be that radical. On the other hand, it is.”
"Roughly speaking," Huisman continued, "I would see two dominant evaluation practices across different higher education system -- both are actually easy ways out. One is to develop REF-style formats that discipline academics" -- REF being a reference to Britain's Research Excellence Framework, a system for evaluating university research performance. "It is an easy way out, for there will likely be a sufficient number of academics that are willing, some with gnashing teeth, to play that game. The other is to totally do without, which would also be cheap and easy, although not in line with what society would expect from any professional (semi-)publicly funded higher education system. The most courageous, and in that sense somewhat radical, is to be one of the first to swing the pendulum against what seemed to be an irresistible trend of performance management.”
Sarah de Rijcke, a professor of science and evaluation studies and director of the Centre for Science and Technology Studies at Leiden University, in the Netherlands, described the policy change at Ghent as admirable. "The merits of the new procedures are obvious: they encourage greater emphasis on content, relevance, and leadership than traditional evaluation criteria. Of course, there are also potential challenges," she said.
De Rijcke continued, “The introduction of new criteria could create uncertainty among researchers. ‘What will I be assessed on exactly?’ And what if I indeed adopt these local criteria? What if I orient my research to this particular university -- how will others then judge my CV when I apply for a position elsewhere, outside of Ghent or Belgium? This is a relevant concern, seeing that many scientific fields operate on a global scale. Successful implementation will depend on a fit between the new criteria on one hand, and the beliefs and expectations of the research community on the other. Another crucial success factor is the willingness of people in leadership positions to adopt this new system. There is a risk that evaluators will slip back into their ‘comfort zone’ and resort to traditional assessment criteria. These are the criteria that got them into their leading positions in the first place. The new career model requires much more than a policy change: it requires a cultural change.”