
Academic publishing is supposed to favor the strongest research -- regardless of who’s producing it. Yet we know that isn’t always true. Various studies suggest, for example, that the system leans toward significant or otherwise favorable results over null ones, toward research from elite institutions, and toward male authors over female ones.

A new study examines another possible kind of bias: whether journals favor research affiliated with their publishing institutions. The short answer is yes.

Hypothesizing that journals housed at particular universities might apply a slightly lower quality bar to authors who work at those institutions or earned a Ph.D. there, the study’s authors compared citation counts for papers by authors who were linked to a journal by employment or degree against those by authors who were not. On average, articles received nine to 19 fewer Web of Science citations when published in an author’s “home” journal than in a journal to which the author had no such link, relative to authors who weren’t affiliated with any of the institutions in the study.

The authors acknowledge that using citation counts as a proxy for research quality is a perennially vexed practice, since seemingly arbitrary things such as article length, number of authors and placement within an issue -- along with other biases -- can all affect citation numbers. And just because more people are citing an article doesn’t necessarily mean it’s better. But the authors say their findings still hold important implications for academic publishing, lest good work go unpublished to make way for lower-quality, in-network papers.

“The results confirm the existence of academic in-group bias, at least in some academic journals,” the study says, borrowing the psychological term for favoritism toward members of one’s own group. “This means that in-group bias could be an important factor underlying the acceptance and publication of academic articles -- or equally, the rejection of articles by out-group authors.”

As for possible explanations, the paper says it could just be simple favoritism. Alternatively, it says, “journal editors may use pedigree as a signal for quality: journals may find it hard to assess the quality of all papers that reach the editor’s desk, and so rely instead on the institutional affiliation of the authors.”

The paper, shared recently on the Social Science Research Network, tracks Web of Science and Google Scholar citation data for articles published in four major political science journals, two with institutional affiliations and two without. The affiliated journals are International Security, housed at Harvard University and published by MIT Press, and World Politics, housed at Princeton University and published by Cambridge University Press. The “control” journals were International Organization and International Studies Quarterly.

All four journals are known to publish high-quality work, the paper says. But the citation data, concerning 1,684 articles published between 2000 and 2015, are evidence of at least some in-group bias in the two publications linked to institutions.

Some 23 percent of the articles in the sample were written by “in-group” members. About 7 percent were by faculty members at either Harvard, the Massachusetts Institute of Technology or Princeton. Some 16 percent were written by academics with Ph.D.s from those institutions.

Across all four journals, papers by in-group authors received, on average, nine more Web of Science citations (for faculty) or 11 more (for Ph.D. holders) than papers by out-group members, suggesting that “in-group members in general produce high-quality research,” the paper says. Interestingly, for in-group and out-group authors alike, papers published in the two affiliated journals received fewer citations than papers published in the two unaffiliated journals.

Controlling for factors such as paper length, the authors find that articles published by in-group members in their home journals tend to be cited less often than papers those same scholars publish in other journals, with the effect strongest for scholars associated with MIT or Harvard. The results for Ph.D. holders (as opposed to affiliated faculty) were particularly striking: compared with the corresponding gap for out-group Ph.D. authors, in-group Ph.D. authors appear to lose nearly 20 Web of Science citations by publishing in their home journal rather than in one of the unaffiliated journals. The Google Scholar data show a similar pattern.

The authors found, for example, that the average article in World Politics by an author affiliated with Princeton gets 80 Google Scholar citations, while papers by non-Princeton researchers receive roughly 105 Google Scholar citations.
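
To make the comparison behind these numbers concrete: the study’s design amounts to a difference-in-differences, contrasting the home-versus-elsewhere citation gap for in-group authors with the same gap for out-group authors. Below is a minimal sketch of that arithmetic in Python; the World Politics figures match those quoted above, but the other two averages, and the variable names, are hypothetical placeholders, not numbers from the study.

```python
# Sketch of the difference-in-differences comparison described above.
# Only the two World Politics averages are taken from the article;
# the "other journal" averages are hypothetical placeholders.

home_in_group = 80       # Princeton-affiliated authors in World Politics (reported)
home_out_group = 105     # non-Princeton authors in World Politics (reported)
other_in_group = 110     # hypothetical: Princeton-affiliated authors elsewhere
other_out_group = 108    # hypothetical: non-Princeton authors elsewhere

# Citation gap between the home journal and other journals, for each group.
in_group_gap = home_in_group - other_in_group      # 80 - 110 = -30
out_group_gap = home_out_group - other_out_group   # 105 - 108 = -3

# The in-group "penalty" is how much worse the home-journal gap is
# for in-group authors than for out-group authors.
penalty = in_group_gap - out_group_gap             # -30 - (-3) = -27

print(f"Estimated in-group home-journal citation penalty: {penalty}")
```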

Articles by out-group members, meanwhile, are cited with the same general frequency regardless of where they are published.

“Academic In-Group Bias: An Empirical Examination of the Link Between Author and Journal Affiliation” was written by Yaniv Reingewertz, an assistant professor of public administration and policy at the University of Haifa in Israel, and Carmela Lutmar, a lecturer in international relations at Haifa. Their intent wasn’t to pick on specific journals or on political science, but rather to look at academic in-group bias in fields other than law and economics, where it has previously been studied. In a recent write-up of their research for Harvard Business Review, Reingewertz and Lutmar say that “academic in-group bias is general in nature, even if not necessarily large in scope.” Journals might also choose their articles based on factors other than likely citation counts, “such as better suitability with the journals’ scope,” they add. Reingewertz and Lutmar also point out that most journals are not affiliated with a specific institution.

But academic in-group bias, where it exists, can cause harm -- “tilting” tenure and other personnel decisions based on publication data, for example, they say. The authors suggest that such bias can be minimized by putting less weight on publications of in-group members in the home journals and assigning more weight to publications of out-group members.

Another possible approach, Reingewertz and Lutmar say, would be to use a strictly double-blind refereeing process, in which not even the editor can see the author’s affiliation.

Perhaps most important is the possible effect of academic in-group bias on “the academic endeavor to advance science,” they note. If articles aren’t published based on merit, “the dissemination of knowledge might be at stake. Having non-meritocratic systems might push out talented individuals, to the detriment of the academic community.”

Thanasis Stengos, University Research Chair in Econometrics at the University of Guelph in Canada, has studied academic publishing and, consistent with some data on the topic, said in-group bias seems to be present in his field as well. The effect seems particularly strong for more junior academics, however, he said, in that journals seem to want to give local scholars a “head start to establish themselves.” Then, he said, if they’re “worthy,” based on a journal’s typical standards, they’d continue to be published on their own merit.

“It does not pay to keep helping out weak researchers, as these journals would lose their leading ranking,” he said.

Andrew Piper, professor and William Dawson Scholar of Languages, Literatures and Cultures at McGill University in Canada, co-wrote a study last year that found humanities journals favor research from elite institutions. As for in-group bias, Piper said this week that he was confident it exists in the humanities, as well.

"We noticed that local institutions were overrepresented in journals published by those institutions," he said of his own study. At the same time, he said, it's "important to point out that this has often been standard practice in the humanities. Some might see a different ethos at work -- in-group bias in an arts-based context might be called a school of thought or movement. Communities generate new ideas. This is another way to look at this question with a less skeptical eye."

However, Piper said, because these practices are tied to tenure, promotion and the sharing of knowledge, “they become more problematic. But it is worth thinking about without subscribing to the absolute objectivity of journal editing, which never exists, but which the authors assume.”
