
Some valuable commentary has come out since John Bohannon’s article on dubious open access journals was published in Science.
 
The Library Loon has written several useful posts reflecting on what the sting failed to accomplish, pointing out that “the usual means that academic stakeholders, from tenure-and-promotion committees to collection-development librarians, use to judge journal quality are rapidly proving untrustworthy and gaming-prone.” Yet scaling up the kind of work Bohannon did would overtax everyone involved in scholarly publishing. She asks us to think about alternative ways to figure out which publishers and publications have credibility. That attracted a promising proposal from the Digital Drake, to create a well-edited crowd-sourced scholarly answer to Writer Beware, a volunteer project that operates under the aegis of the Science Fiction and Fantasy Writers of America.

I thought this such an intriguing proposal that I asked around and instantly found dozens of willing volunteers. Developing a process and a set of standards, and finding an organizational sponsor that can take the heat (and the letters that might come from lawyers), is a project of its own, but one that I think would be worthwhile. (Jeffrey Beall has been doing something like this for some time, but he doesn’t include tracking the less respectable subscription journals in his remit, and seems to have become partisan, to the point of declaring the serials crisis over.)

The Loon also casts a critical beady eye on the claim that the problem of scammy faux publications would go away if we had open peer review. It’s worth reading her explanation of why that really isn’t a solution at all. She is particularly sharp on the importance of having a system that works for those who are not insiders who already know all the players and have developed a sense of which publishers are solid and which are not. She’s talking about people like our students. Like the general public. Like ourselves, when we stray outside our areas of expertise.

A couple of days ago Stuart Shieber added his take on the situation, issuing one of his always-worth-a-read Occasional Pamphlets on the lessons learned from the Bohannon “sting.”

He believes we should be concerned about the proliferation of scam journals, and commends Bohannon for collecting so much detail on these operations and making his data available to other researchers, while noting the investigation’s limitations. He writes that “the problem of faux journal identification may not be as simple as looking at superficial properties of journal web sites,” which is particularly true for non-specialists: our students, the general public, and ourselves when exploring unfamiliar territory. He concludes, as the Loon does, that we need a better way to judge the value of publishing venues, writing that “it behooves the larger scientific community to establish infrastructure for helping researchers by systematically and fairly tracking and publicizing information about journals that can help its members with their due diligence.”

How do we know which journals are good? Many years ago, when my little library reviewed our journal subscriptions, we relied on Katz’s Magazines for Libraries, which provided short evaluative reviews of magazines and journals in various disciplines. (That venerable publication continues in the form of reviews of journals on a free site hosted by ProQuest’s Serials Solutions, but there is a limit to how many titles it can cover.) As our subscriptions to individual journals shrank and the titles available in database packages soared, we found ourselves choosing databases, whose full-text content can change without notice, rather than individual journals. We have evaluated and added a handful of subscriptions to individual journals in the past decade, but they are far outnumbered by journals we’ve canceled in order to pay for what’s left.

Yet none of this transition to big packages of content helps faculty figure out where to submit their research, nor does it help our students make good choices about which articles to use in their own work. There is no algorithm in library databases for quality or even prestige. We have to make those calls on a case-by-case basis.

How could we build an infrastructure for tracking journal quality? How do you decide which journals to submit to, or which to recommend to students as particularly valuable? Is there a way we could capture all those micro-judgments scholars make routinely in some kind of system? One that can’t be easily gamed?

I’m eager for suggestions.  

 
