Every university has a list of A journals, those it considers to be the most prestigious in each field. Even the publications that rank institutions maintain such lists, and many universities use them to measure their impact. As a result, academics establish their credentials by publishing in these journals, and universities grant tenure and promotion for the same. Various institutions even pay their professors a bonus (what some people would call a bribe) for publishing in such select journals.
This is warping the scientific process by narrowing the scope of impact to one type of journal, which reaches one type of audience using one type of content and style. The situation became so bad that Randy Schekman, a Nobel laureate in physiology or medicine, announced in 2013 that his lab would no longer send research papers to what he calls the “luxury” journals of his field -- Nature, Cell and Science -- because they encourage research that pursues trendy and mainstream lines of inquiry rather than more self-directed and innovative work.
I have seen that firsthand, working with junior faculty who say they cannot publish in a particular journal because it is not on their institution’s A list and therefore will “not count” toward their accomplishments. This is anti-intellectual. As Russell Jacoby warned in his book The Last Intellectuals, it “registers not the needs of truth but academic empire building.” Academic publishing is becoming more about establishing a pecking order and less about pursuing knowledge. And that has several unintended consequences.
A limited audience. It is time to recalibrate our research norms around whom we are trying to reach with our work, and to re-examine our notions of impact through outlet and audience. A good research portfolio has a mix of A and B journals, each used for its own purpose. The target of A journals is typically a narrow audience of other disciplinary academics. But that misses entire swaths of audiences. Many B journals reach a broader set of academics, many with a more empirical focus. And some journals reach beyond the walls of academe to speak to policy makers, nongovernmental organizations, businesses or the general public. Further, not all worthwhile outlets are traditional ones: blogs and other forms of social media are now becoming part of the academic portfolio.
Does our work actually result in real-world change? In the A journals, that is a question that is rarely, if ever, asked. Many academics, in fact, would argue that the question is irrelevant to their pursuit of knowledge. But certainly our work is meant for more. In a recent decision to include social media and digital activities in its criteria matrix for academic advancement, the Mayo Clinic's Academic Appointments and Promotions Committee announced, "The moral and societal duty of an academic health-care provider is to advance science, improve the care of his/her patients and share knowledge. A very important part of this role requires physicians to participate in public debate, responsibly influence opinion and help our patients navigate the complexities of health care." This is a compelling challenge to move away from a narrow focus on A journals.
Less creative and diverse research. Beyond audience, publishing only in A journals can limit creativity and diversity, as those journals constitute one type of channel with one set of criteria for what constitutes “good” research. But are those the only criteria that matter?
In some fields (such as mine, management), the A journals are generally theory driven, whereas the B journals are generally phenomena driven. That has led Donald C. Hambrick to offer the critique that the former have a “theory fetish,” where practical relevance takes a backseat to theoretical rigor, and empirical evidence is used to inform theory, rather than the other way around. As papers go through the review process, he warned, “The straightforward beauty of the original research idea will probably be largely lost. In its place will be what we too often see in our journals and what undoubtedly puts nonscholars off: a contorted, misshapen, inelegant product, in which an inherently interesting phenomenon has been subjugated to an ill-fitting theoretical framework.”
Hambrick continues, “In academic management we have allowed obsession with theory to compromise the larger goal of understanding. Most important, perhaps, it prevents the reporting of rich detail about interesting phenomena for which no theory yet exists but which, once reported, might stimulate the search for an explanation.”
These are the foibles of the management A journals, but each discipline has its own issues. In the A journals of any field, good research is defined as that which propels the research tracks of the moment. That blinds the field to interesting ideas that may lie outside those tracks, and few scholars dare to deviate from them for fear of risking tenure.
Yet such nonconformity can lead to real payoff. For example, Paul Krugman, Nobel laureate in economics, published some of his best papers in B journals because, he told me, “They were rejected by A journals!”
Krugman’s story is a cautionary tale for young academics in the midst of the great explosion of publishing outlets. Today, there are just under two million articles published annually in an estimated 28,000 journals. Some are in what are considered A journals, but the vast majority are in B journals. Add to that growing landscape the world of social media. Many academics are now using blogs to test and crowdsource their ideas with peers and the general public. In short, future academics can publish in a broad portfolio of outlets to increase the creativity and impact of their life’s work.
Guaranteed irrelevance. How long does it take for an article to move from submission to publication? One study found that publication lags range from nine to 18 months, with the shortest overall delays occurring in science, technology and medical fields and the longest in social science, arts/humanities and business/economics. Such long lag times virtually guarantee that a paper’s findings will have lost practical relevance by the time they appear.
Moreover, as the number of researchers and papers grows over time -- according to another study, the number of scholarly papers is growing at a rate of 3.26 percent per year, or doubling every 20 years -- you could fairly hypothesize that much of this growing volume of research will be aimed at the short and fairly static list of A journals, thus leading to ever-longer publishing lag times.
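As a quick consistency check on that figure (my arithmetic, not the study’s): a constant growth rate of 3.26 percent per year implies a doubling time of

\[ t_{2} = \frac{\ln 2}{\ln(1.0326)} \approx 21.6 \ \text{years}, \]

which is broadly consistent with the roughly 20-year doubling the study reports.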
As this lag time increases, think about the number of hours an average academic will spend over the course of the one to four years necessary to publish an A paper. One study estimated that the cost of a single scholarly article written by business school professors was as much as $400,000.
Is that really the best use of so much high-powered mental capacity? Is the outcome and payback really appropriate to the effort? How could that time be better spent? In some cases, the same paper could be submitted to a B journal, accepted and published more quickly, with time remaining to disseminate the results in a blog, a media interview or some other format -- and with the next paper begun.
Questionable impact. Despite such sobering statistics, academics are still directed to pursue the A journals for academic status. And that pursuit disregards equally sobering statistics on who actually reads them. We can take this issue in two parts.
First, let’s consider a journal’s impact factor: the number of citations in the current year to articles published in the previous two years, divided by the number of substantive articles and reviews published in those same two years. So an impact factor of 5.3 for a top-tier A journal in my field, Administrative Science Quarterly, means that articles from the previous two years were cited an average of 5.3 times over the past year. The five-year impact factor only raises that number to 7.5. Is that real impact?
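Written out explicitly (a generic rendering of the standard two-year definition, not any one indexer’s exact formula), the impact factor for year $y$ is

\[ \mathrm{IF}_{y} = \frac{\text{citations received in year } y \text{ to articles from years } y-1 \text{ and } y-2}{\text{number of citable articles published in years } y-1 \text{ and } y-2}. \]

In other words, it is simply the average number of citations per recent article per year -- nothing more.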
Looking more deeply, the distribution of citations is highly skewed, leading to what some call the 80/20 phenomenon, where 20 percent of articles may account for 80 percent of citations. A 2005 editorial in Nature noted that 89 percent of the journal’s impact factor of 32.2 could be attributed to just 25 percent of the papers published in that period. In a larger study, only 0.5 percent of 38 million articles cited from 1900 to 2005 were cited more than 200 times.
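A back-of-the-envelope decomposition of that Nature figure (my arithmetic, assuming only the percentages stated above) makes the skew concrete:

\[ \text{top 25 percent: } \frac{0.89 \times 32.2}{0.25} \approx 115 \ \text{citations per paper}; \qquad \text{remaining 75 percent: } \frac{0.11 \times 32.2}{0.75} \approx 4.7. \]

The headline number of 32.2 thus describes almost none of the journal’s actual papers.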
And that leads to the second way to look at the question. Citation counts are our primary measure of a paper’s scholarly impact, and yet citation counts on average are distressingly low. By one count, 12 percent of medicine articles were never cited, nor were 27 percent of natural science papers, 32 percent in the social sciences and 82 percent in the humanities. Another study found that 59 percent of articles in the top science and social-science journals were not cited in the period from 2002 to 2006. It is time to question our primary reliance on citations and journal impact factors for measuring impact.
B journals that reach nonacademic audiences are cited much less by academics (if at all) and are therefore dismissed as lacking impact. Further, social media is starting to enter the academic portfolio and is likewise ignored, even though increasing numbers of the public, politicians and even fellow academics get their information about science there. How does a blog with a half million views compare in impact to the average academic paper, which was cited only 10.81 times between 2000 and 2010 (a number that drops to 4.67 for the social sciences), according to Thomson Reuters?
Further, some preliminary research is beginning to show a positive effect of social media, like Twitter, on the visibility (and even citation counts) of academic papers. And some organizations, like the American Sociological Association, are exploring metrics and models for rigorously measuring the impact of alternative outlets. It is time to reconsider whom we are trying to reach and how we measure the extent to which we are reaching them.
What Are We Becoming?
In 1963, Bernard Forscher published a letter in Science magazine, lamenting that academic scholarship had become fixated on generating lots of pieces of knowledge -- bricks -- and was far less concerned with putting them together into a cohesive whole. In time, he worried, brick making would become an end in itself.
Perhaps his critique has now come true. We are becoming a field of brick makers, and the narrow focus on A journals is one factor among several guiding us there. That is truly dangerous, as we may be courting irrelevance as a result. We need to re-examine how we practice our craft -- not to challenge the rigor of what we do, but to recalibrate and expand our focus. Returning to the sentiments expressed by the Mayo Clinic: “As clinician educators our job is not to create knowledge obscura, trapped in ivory towers and only accessible to the enlightened; the knowledge we create and manage needs to impact our communities.”