Are editors manipulating citation scores in order to inflate the status of their publications? Are they corrupting the rankings of scholarly journals?

While any allegations about cheating or other academic chicanery are cause for concern, journal rankings to date continue to offer one rough but useful source of information to a wide variety of audiences.

Journal rankings help authors to answer the omnipresent question “Where to publish?” Tenure review committees also use rankings as evidence for visibility, recognition and even quality in the academic review process, especially for junior candidates. For them, journal ranking becomes a proxy when other, more direct measures of recognition and quality are not available. Given that many candidates for tenure have recent publications, journal rankings become a surrogate measure for the eventual visibility of that research.

Yet it is easy to rely unduly on quantitative rating scores. The trouble arises when journal rankings become a stand-in for the quality of the research. In many fields, research quality is a multifaceted concept that is not reducible to a single quantitative metric. Imposing a single rule -- for example, that top-quartile journals count as “high-quality” journals while others do not -- assigns more weight to journal rankings than they deserve and generates the temptation to inflate journals’ scores.

In an editorial in the journal Research Policy, editor Ben R. Martin voiced his concern that the manipulation of journal impact factors undermines the validity of the Thomson Reuters Journal Citation Reports (JCR). He concludes that “… in light of the ever more devious ruses of editors, the JIF [journal impact factor] indicator has lost most of its credibility.” A journal’s impact factor represents the average number of citations per article: the standard, one-year impact factor is calculated by summing up the citations to articles the journal published within the last year and dividing by the number of articles published.
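
To make the arithmetic concrete, here is a minimal sketch in Python of that per-article average. The function and the figures in the example are hypothetical illustrations, not the actual JCR calculation, which applies its own rules about citation windows and which items count as citable.

# A minimal, hypothetical sketch of an "average citations per article" score.
def impact_factor(citations_received: int, articles_published: int) -> float:
    """Citations to a journal's recent articles divided by the number of articles published."""
    if articles_published == 0:
        raise ValueError("the journal published no articles in this window")
    return citations_received / articles_published

# Invented numbers: a journal whose 300 recent articles drew 1,200 citations
# would score 4.0 on this measure.
print(impact_factor(citations_received=1200, articles_published=300))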

I share the suspicion and unease that many academics feel about excessive reliance on journal impact scores for the purposes of academic evaluation and tenure decisions. Yet, while I am not a fan of impact scores calculated over a one-year period, my research on journal rankings leads me to conclude that Martin’s concerns are overstated.

The two main sources of manipulation that Martin discusses are coercive citations (whereby editors require authors to add citations to the journal in question) and creating a queue of online articles, which artificially inflates the number of citations per published article. While any intentional manipulation of journal rankings is reprehensible, to date the overall effect of this type of behavior in practice is quite limited. I arrive at this sanguine conclusion after exploring a variety of indexes and data sources in a forthcoming assessment of journals in my own field of sociology.

A clear hierarchy of journals in sociology is evident no matter what data source (Web of Science or Google Scholar) one uses. There is a great degree of commonality across measures in describing this gradient, even though many low-ranked journals are bunched together with quite similar scores. Manipulation of one-year data has not altered the overall picture a great deal (at least not yet) because five-year measures yield very similar rankings. And even to manipulate the one-year impact factor, editors would have to insist that new authors cite the most recently published articles in that journal.

Substantively, I doubt that much manipulation in sociology journals occurs because, first, the raw scores have not inflated over time and second, the relative ranking of more than 100 journals has been quite steady. Individual journals here and there have moved up and down slightly, but these changes are much more readily attributable to changes in the level of scholarly interest in particular subfields and editorial choices than to any individual editor’s efforts to game the system.

The main reason I discount concerns about manipulation is that different approaches to journal rankings produce a broadly similar picture of inequality. In my study, I use Google Scholar data to calculate the h-index for journals. This measure focuses on the top-cited articles over an extended time period rather than average citations in a short time frame. It would not be easy for journal editors to manipulate this measure, even if they were aware of its use.

Let’s take citations to Martin’s own journal, Research Policy, as an example. I obtain an h of 246 over the period from 2000 to 2015. That means the journal published 246 articles during this time frame that have each been cited at least 246 times. That is an impressive score, exceeding the visibility of the American Economic Review (h=227 over the same time period) and the American Sociological Review (h=162). (I calculated all figures with A. W. Harzing’s 2015 Publish or Perish software using Google Scholar data.)
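
For readers who want the mechanics behind such figures, here is a minimal Python sketch of the h-index calculation. It is not the Publish or Perish implementation, and the citation counts in the example are invented for illustration.

# A minimal sketch of the h-index: the largest h such that h articles
# each have at least h citations.
def h_index(citation_counts: list[int]) -> int:
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Toy example: of these five articles, three are cited at least three times, so h = 3.
print(h_index([10, 6, 3, 2, 1]))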

The h statistics just cited reflect the remarkable visibility of these leading journals. It would be quite difficult to develop strategies to artificially generate enough citations to significantly alter those scores. I prefer the use of h as a measure because it attempts to capture the skewed nature of scientific scholarship. Yet the fact remains that the overall hierarchy of journals is broadly similar whether the h-index or the conventional impact factor is used.

In their 2012 study, Allen W. Wilhite and Eric A. Fong present data of concern regarding the prevalence of coercive citations. The pattern of coercive citation was particularly pronounced in lower-tier journals, and especially in the field of business and management. Yet again, I doubt that the overall journal regime is appreciably altered by dubious editorial gaming stratagems. Wilhite and Fong identify eight journals in which this practice might be common enough to matter (more than 10 reports of coercive citation), but none of those journals has made its way into the top tier in the field (as measured in the JCR standings). In other words, by dint of a relentless and long-term commitment to manipulation, some third-quartile journals might be able to inch their way into the second quartile, but that is unlikely to alter the overall contours of the field.

If a significant group of low-visibility journals undertook a major effort to increase their citations, that would make them as a group harder to distinguish from the top journals. In the field of sociology, there is no indication that middle- and lower-tier journals are narrowing the distance from the most frequently cited journals. Indeed, this enduring gap is itself interesting, in that it suggests that search engines are not increasing the visibility of journals to which few individuals subscribe.

At the same time, we need to remember that journal rankings serve as only a rough proxy for visibility and recognition of individual papers. In other words, articles published within the same journal will vary in how often they are cited. In my analysis of 140 sociology journals over the period from 2010 to 2014, most of the 10 most frequently cited papers were not published in the top-ranked journals. Thus, substantial variability in visibility (citations) within journals coexists with broadly stable patterns of inequality between journals.

In addition, the list of top-cited articles is largely impervious to self-citation. It is simply too difficult to cite oneself enough to vault one’s research into this echelon of visibility. For example, the top 10 cited journal articles in sociology from 2010 to 2014 had 400 or more citations. To catapult one’s own paper into this citation stratosphere would require publishing hundreds of papers in just a few years. No one could possibly publish frequently enough and cite themselves regularly enough to win inclusion in the list of top-cited papers. And anyone prolific enough to implement such a strategy would not need to game the system.

Authors have a natural desire to seek outlets that will enhance the visibility of their research. In the field of sociology, that involves a choice among the most selective generalist journals, the top journals in each specialty area within the field, the second-tier generalist journals, and the remaining specialty outlets and interdisciplinary journals. Journal ranking data may be marginally useful in informing such choices. Other important factors include each journal’s particular focus, its selectivity, turnaround time, policies regarding second and third rounds of revisions, and so on.

Journal rankings are likely to remain with us because such rankings are of interest to so many parties, as research by Wendy Nelson Espeland and Michael Sauder suggests, even while their value is likely to remain contested. Perhaps a clearer recognition of the imprecision inherent in journal rankings will mean that they will be used judiciously, as a complement to rather than a substitute for important and difficult academic evaluations. And perhaps the use of a variety of different journal indexes will reduce the temptation to game the system and redirect efforts back toward selecting high-quality research for consideration by the scholarly community.
