“Greetings! You’ve been added to our journal’s editorial system because we believe you would serve as an excellent reviewer of [Unexciting Title] manuscript …”
You probably get these emails, too, and they seem to be proliferating. The peer-review system may still be the best we have for academic quality assurance, but it is vulnerable to human overload, preferences and even mood. The result can be low-effort, late or unconstructive reviews, and that is only after editors are lucky enough to find someone willing to review at all. There should be a better way. Here is an idea for rethinking the reviewer allocation process.
The Pressure on Peer Review
As the number of academic papers continues to grow, so does the refereeing workload. Scientists struggle to keep up with mounting pressure to publish their own work while also taking on the thankless task of reviewing others'. In this environment, low-effort, AI-generated and even plagiarized reviewer reports find fertile ground, feeding a vicious circle that slowly undermines the process. Peer review, the bedrock of scientific quality control, is under pressure.
Editors have been experimenting with ways to rethink the peer-review process. Ideas include paying reviewers, distributing review tasks among multiple reviewers (as is done for project proposals), posting reviews transparently (already an option at some Nature journals) and tracking reviews and awarding virtual credit for them (as with Publons). In one respect, however, journals have apparently experimented little: how submitted papers are assigned to qualified reviewers.
The standard approach to reviewer selection is to match registered referees with submitted papers using a keyword search, the paper's reference list or the editors' knowledge of the field and community. Reviewers are invited to consider only one paper at a time (though invitations often go out en masse to secure enough reviews), and if they decline, someone else is invited. It is an inefficient process.
Choice in Work Task Allocation Can Improve Performance
Our ongoing research on giving workers more choice in task allocation in a manufacturing setting made me realize how few choices academic referees have when asked to review a paper for a journal. It is basically "yes, I'll take it" or "no, I won't": they can only accept or decline one paper from one journal at a time. That seems to be the modus operandi across all disciplines I have encountered.
In our study in a factory context, productivity increased when workers could choose among several tasks. The manufacturer we worked with had implemented a smartwatch-based task allocation system: workers wore smartwatches showing open tasks that they could accept or reject. In a field experiment, we gave some workers the opportunity to select from a menu of open tasks rather than a single one. Our results showed that giving workers choice improved performance.
A New Approach: Reviewers’ Choice
As in the manufacturing setting, academic reviewers might do better in a system that gives them options. One way to improve peer review may be as simple as presenting potential referees with the titles and abstracts of a few submitted papers and letting them choose which one to review.
The potential benefits of choice in reviewer allocation are plausible: referees may be more likely to accept a review when asked to select one paper among several, and their reports should be more timely and constructive when they are genuinely curious about the topic. For example, reviewers could choose one paper from a limited set of titles and abstracts that match their domain or methodological expertise.
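To make the idea concrete, here is a minimal sketch in Python of how an editorial system might assemble such a menu. The data structures and the keyword-overlap score are my own illustrative assumptions, not the workings of any existing platform; real systems would rely on their own metadata and matching methods.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    keywords: set[str]

@dataclass
class Reviewer:
    name: str
    expertise: set[str]

def offer_menu(reviewer: Reviewer, submissions: list[Paper], k: int = 3) -> list[Paper]:
    """Offer up to k papers whose keywords best overlap the reviewer's expertise."""
    # Keep only papers with at least one matching keyword, then rank by overlap.
    relevant = [p for p in submissions if p.keywords & reviewer.expertise]
    relevant.sort(key=lambda p: len(p.keywords & reviewer.expertise), reverse=True)
    return relevant[:k]

# The reviewer sees a short menu and picks one title,
# instead of facing a yes/no decision on a single assigned paper.
reviewer = Reviewer("Dr. A", {"peer review", "incentives", "field experiment"})
submissions = [
    Paper("Choice in task allocation", {"incentives", "field experiment"}),
    Paper("Deep learning for protein folding", {"neural networks", "biology"}),
    Paper("Reviewer fatigue in economics journals", {"peer review", "incentives"}),
]
for paper in offer_menu(reviewer, submissions):
    print(paper.title)
```

The point is not the particular scoring method but the interaction it enables: the system narrows the pool to a short, relevant menu, and the reviewer makes the final pick.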
Taking it further, publishers could consider pooling submissions from several journals in a cross-journal submission and peer-review platform. This could help the review process focus on the research rather than on where it is submitted, in line with the San Francisco Declaration on Research Assessment. On such a platform, double-blind rather than single-blind review may be preferable to reduce biases based on affiliations and names.
What Can Go Wrong
Given the increased pressure on publishing, rethinking peer review is important in its own right. However, shifting to an alternative system based on choice introduces a few new challenges. First, authors risk exposing their ideas to a broader set of reviewers, some of whom may be more interested in mining ideas for their next project than in providing a constructive review.
Relatedly, if the platform is cross-journal, authors may hesitate to expose their work to many reviewers if it is rejected repeatedly. Second, authors may be tempted to use clickbait titles and abstracts, although this can backfire when reviewers don't find what they expected in the paper. Third, marginalized or new topics may attract no interested reviewers; as in the classic review process, such papers could still be handled by editors in parallel. These obstacles deserve consideration, but testing a solution should be low-risk.
Call to Action
Publishers already have multi-journal submission platforms that make it easier for authors to submit papers to a range of journals or to transfer manuscripts between them. Granting reviewers more choice should therefore be technically easy to implement. The simplest way would be to use current platforms to offer reviewers a small number of papers and ask them to choose one. A downside could be longer turnaround times if a single journal has too few concurrent submissions to fill a menu, so pooling papers across a subset of journals could be beneficial.
For this to succeed, reviewers should be vetted and accept a code of conduct, and journal editors must accept that their journals will be reviewed with the same scrutiny as the other journals in the pool. Perhaps there could be tit-for-tat guidelines, such as completing two or more constructive reviews for each paper an author team submits. Such rules could work once there are economies of scale in journals, reviewers and papers. Editors, who will be the first to try it?