There have already been skirmishes in what may become, as The Atlantic and others have put it, a “plagiarism war” in academe.
The battle lines began forming in December, after Republican representative Elise Stefanik of New York asked Harvard University president Claudine Gay during a hearing on campus antisemitism whether calling for the genocide of Jews would be a violation of Harvard’s rules governing speech. After Gay ignited controversy by replying, “it can be, depending on the context,” prominent diversity, equity and inclusion opponents quickly published plagiarism allegations against her. House Republicans announced a congressional investigation into Harvard’s handling of the allegations.
Gay submitted corrections to her work and eventually resigned. Business Insider then fired a countervolley of sorts, publishing plagiarism allegations against “celebrity academic” Neri Oxman, the wife of billionaire DEI opponent and prominent Gay critic Bill Ackman. He responded by announcing he’d launch a plagiarism review of all faculty members and others at the Massachusetts Institute of Technology, where Oxman previously worked.
“Why would we stop at MIT?” he later posted on X. “Don’t we have to do a deep dive into academic integrity at Harvard as well? What about Yale, Princeton, Stanford, Penn, Dartmouth? You get the point.”
It’s unclear how extensive the “plagiarism hunting” will be, to cite the phrase from Christopher Rufo, one of the DEI opponents who published the plagiarism allegations against Gay. Rufo posted Jan. 2 that he was “contributing an initial $10,000 to a ‘plagiarism hunting’ fund” to “expose the rot in the Ivy League and restore truth, rather than racialist ideology, as the highest principle in academic life.” People could contribute, he then said, by becoming paid subscribers to his Substack website.
But if Rufo and some of his more than 600,000 X followers, Ackman and his billions, and conservative media and their university sources actually begin crusading against plagiarism—and if mainstream media, faculty members and the left make more plagiarism allegations of their own—just how much plagiarism will they actually root out?
The bracing fact is that no one seems to know how extensive—or serious—the problem may actually be.
Inside Higher Ed reached out to journal editors, publishing companies, government agencies and research-misconduct sleuths to try to gauge the actual scale of plagiarism in dissertations and academic journal articles with U.S. authors—and why it sometimes goes undetected. We uncovered sharp disagreements not just over how common plagiarism is—some say it’s rampant, others that it’s rare—but also over how it’s defined and how well equipped evolving digital tools are to combat it.
But we found one point of general agreement: the heavy pressure academics are under to keep publishing fuels the problem, however widespread it may be.
Seek and Ye Shall Find?
“We just don’t know how prevalent it is,” said Debora Weber-Wulff, a retired media and computing professor who’s been researching plagiarism since 2002. “All we know is when we look, we find something, and that means it must be pretty prevalent.”
Weber-Wulff said Ackman’s search would help show how much plagiarism is out there—if the search were done methodically. She estimated he would need five to six people working for two to three years to analyze all MIT faculty members’ work in a “serious scientific manner.”
“An investigation like this would be unbelievably expensive, but I believe a billionaire has money, so that could turn his warfare into actual, good science,” Weber-Wulff said.
There are some data already, but they are limited. For over 13 years, Retraction Watch has been investigating plagiarism and other allegations of publishing misconduct, such as professors falsifying data or paying unscrupulous actors to place their names on papers they didn’t contribute to. Its public database, owned by the nonprofit Crossref, now includes 385 retractions, corrections or other notices for plagiarism-related issues in articles, book chapters or other items with U.S.-affiliated authors.
But Ivan Oransky, a Distinguished Journalist in Residence at New York University and a co-founder of the site, still doesn’t claim to know how much more plagiarism is out there. He believes it’s far more extensive than the database suggests.
“It’s clearly more common than anyone would like to think,” he said. “Whatever we’re seeing and whatever people are reporting on is clearly an undercount.”
“We need to stop being surprised when anyone—good-faith actor, bad-faith actor, nonfaith actor—finds plagiarism or other misconduct in the scholarly literature, full stop, and until we all ask just how common these problems are, we are going to be as screwed as we are at the moment,” Oransky said.
But others aren’t so sure that plagiarism is prevalent—or, at least, as much of an issue as it once was.
Elisabeth Bik, a science consultant and former staff scientist at Stanford University’s School of Medicine, searches “biomedical literature for inappropriately duplicated or manipulated photographic images and plagiarized text,” according to her website—which says her work has resulted in “1,069 Retractions, 149 Expressions of Concern, and 1,008 Corrections” as of November. But that is a worldwide number. Among U.S. academics, she wrote in an email, plagiarism is now “pretty rare, to the best of my knowledge.”
“I have found plagiarism both in U.S. theses as well as in scientific papers myself, including some written by U.S. scholars,” Bik wrote. “However, most of these were written before 2010 or so. After that, plagiarism checking software became more commonly used, so I assume that more recent dissertations contain less.”
It may depend on the research field. John M. Budd, a University of Missouri professor emeritus who co-edits the Journal of Education for Library and Information Science, wrote in an email that he’s “conducted inquiries into retractions of published papers” in biomedical sciences, and in that field “plagiarism tends to be quite uncommon.”
“The problems are more rooted in the fabrication of data and error without blame falling on the researchers,” he said. “In the text- and narrative-heavy humanities and social sciences plagiarism—intentional or not—is a more serious problem.” It may not always be deliberate, he said, but “there can definitely be sloppiness on all parts (authors, dissertation committee members, journal editors and reviewers).”
Not everyone agrees that plagiarism is a bigger problem in the humanities or social sciences. Some don’t see it as much of an issue at all among U.S. academics. Jim Grossman, executive director of the American Historical Association, which publishes the American Historical Review journal and the Perspectives on History news magazine, said, “Plagiarism is not a frequent problem among professional historians.
“Usually, when someone has plagiarized, the person who reports it is the person who was plagiarized, that’s who knows, and the American Historical Review is sufficiently visible,” Grossman said. “If that has been rife, we would know.
“There is no epidemic of plagiarism,” he said.
There are also different degrees of plagiarism. “The definition of plagiarism is not straightforward,” and views differ, said Frits Rosendaal, one of two editors in chief of the Research Integrity and Peer Review journal.
The question of what constitutes “serious” plagiarism flared up with the allegations against Gay. Some of her alleged plagiarism occurred in a 2001 article in the American Political Science Review, a journal of the American Political Science Association. The Harvard Crimson student newspaper reported that, in one paragraph, Gay lifted phrases verbatim from a study by other authors. While she cited them at the top of her paragraph, she didn’t put quotation marks around their words, the Crimson reported.
The current co–lead editors of that journal, who said their editorial team has been in place since June 2020—long after Gay’s article was published—said they haven’t seen much serious plagiarism. They don’t consider Gay’s citation issues to fall into that category. They’ve only had “a couple of situations where a reviewer has identified what we would call plagiarism of ideas in an article that we’re considering,” said Julie Novkov, one of the co–lead editors and the dean of the Rockefeller College of Public Affairs and Policy at the University at Albany, part of the State University of New York.
Novkov, who’s also a professor of political science and women’s, gender and sexuality studies, said that “to my knowledge, we have not had a situation where we found” what she called the “kind of worst category of plagiarism: the clear plagiarism, unattributed use of large portions of text, data, etc.”
The other co–lead editor, Michelle Dion, who used to teach in the U.S. and is now a political science professor at Canada’s McMaster University, said Gay’s alleged plagiarism “was very minor and, in some regards, unsurprising, given that it was an article that grew out of her dissertation, which was done in the late ’90s, before a lot of the tools that we have now that would help us avoid those types of mistakes.
“That specific type of error, the magnitude of it, the amount of it, the nature of it, probably exists in a lot of research published particularly from that era because it was before we had databases of tools that would help eradicate that,” Dion said.
“I do think that some of these more minor instances are likely more unintentional, accidental, sloppy, research practices as opposed to malignant, intentional stealing of ideas,” Dion said. “To the extent that it’s a problem, it’s not a problem of bad actors.”
Not the Purpose of Peer Review?
Some academics said that peer review—the traditional process in which journals invite scholars in a submitted article’s field to review it, suggest edits and consider whether it should be published at all—doesn’t involve much checking for plagiarism. They said peer reviewers don’t have enough time for that and aren’t expected to do it.
Mohammad Hosseini, an assistant professor at Northwestern University and an associate editor of the Accountability in Research journal, wrote in an email that peer reviewers “are volunteers and already under a lot of pressure. Expecting them to also check citations is not realistic.” He said it’s “also unreasonable to expect dissertation committees to check every single citation because dissertations might have hundreds.”
Mario Malički, the other editor in chief of Research Integrity and Peer Review and the associate director of the Stanford University Program on Research Rigor and Reproducibility, wrote in an email that, worldwide, there are “7 million scholarly papers being published a year, not counting books, blogs and other outputs. So to expect that a human can detect every instance of plagiarism is unrealistic.” Citing a paper he co-authored, he wrote that two reviewers, on average, scrutinize a paper before it’s published.
“And no matter how good those two are, the eyes of [a] dozen, hundreds or thousands or more readers afterwards will likely always be better,” Malički wrote. The “core essence” of peer review, he said, “pertains to evaluating the methods of research and results that came from those methods.”
Bik, the science consultant who ferrets out misconduct, said generally that “peer-reviewers or academic committees should not be held responsible for fraud detection … Plagiarism and other fraud detection is a task that should be done by paid staff at journals and publishers, or as a routine check by an administrative office at universities.”
More than 12,000 of the journals that publish academic papers are clustered among a handful of scientific publishers, according to a September 2022 article from the Journal of Documentation. These publishers—Springer Nature, Taylor & Francis, Elsevier, Wiley and SAGE—didn’t provide Inside Higher Ed much data, or in some cases any response, on plagiarism within their own journals.
Plagiarism cases “represent a very small percentage of our investigations” into misconduct, a Wiley spokesperson wrote in an email to Inside Higher Ed. A Taylor & Francis spokesman wrote, “Plagiarism is among the research integrity issues we see most regularly,” but “this is now far outweighed by cases of image and data manipulation.”
Kim Eggleton, head of peer review and research integrity at IOP (Institute of Physics) Publishing, wrote to Inside Higher Ed that the “primary onus to detect plagiarism is on the publisher and the in-house editors … The purpose of peer review is to evaluate the validity, point out any fundamental flaws in the research and provide constructive advice for the authors.”
The spokesman for Taylor & Francis, the large journal publisher, wrote in an email that identifying plagiarism isn’t “a role of the peer review process, although it does sometimes happen.” Instead, he said the company “invests considerable resources into checking for plagiarism as part of editorial assessment, usually prior to peer review.” The spokesman said “plagiarism-detection tools are available to all Taylor & Francis journals.”
Software: Sound or ‘Stupid’?
Multiple publishers told Inside Higher Ed that they run submitted articles through plagiarism-detection software, with a couple naming iThenticate as a main tool. Staff members can run those checks instead of peer reviewers, and potential issues the software flags can be passed on for a human check. But the value of the software is disputed. And not all journals can afford it.
“As an industry, we have evolved to incorporate plagiarism detection practices at the point of submission as standard in our journals, which detects text similarity to published work,” a Wiley spokesperson wrote. “This serves as a flag for journal editors and publishing staff to look more closely at those submissions and to help identify legitimate cases.”
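The “text similarity” checks the Wiley spokesperson describes can be illustrated in principle, though commercial tools like iThenticate use proprietary methods and match submissions against large licensed full-text corpora. The minimal sketch below, in Python, compares two passages by the overlap of their word five-grams using Jaccard similarity—a common baseline for this kind of flagging. The function names, the 0.15 threshold and the sample passages are all illustrative assumptions, not any vendor’s actual implementation.

```python
# A minimal sketch of text-similarity flagging, NOT iThenticate's actual
# method: word 5-gram "shingles" compared by Jaccard similarity.

def shingles(text, n=5):
    """Return the set of word n-grams (shingles) in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=5):
    """Jaccard similarity of two texts' shingle sets, from 0.0 to 1.0."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Hypothetical sample passages for illustration only.
submission = ("the results suggest that voter turnout rises "
              "when local candidates appear on the ballot")
source = ("our results suggest that voter turnout rises when local "
          "candidates appear on the ballot in midterm years")

# In the workflow publishers describe, a high score is not a verdict:
# it is a flag passed to a human editor for review. The threshold here
# is an arbitrary illustration.
score = similarity(submission, source)
if score > 0.15:
    print(f"Flag for manual review (similarity = {score:.2f})")
```

As the sketch suggests, such a check detects verbatim or near-verbatim overlap; it says nothing about whether the overlap is properly quoted and cited, which is why flagged matches still go to human editors.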
An Elsevier spokesperson wrote that the publisher “uses iThenticate as the primary tool to check all manuscripts at submission for plagiarism” and trains “editors on how to interpret the results generated.” The spokesperson said there’s “added value in iThenticate as all other major publishers also use it and provide their content for it” and said Elsevier uses other “proprietary” processes but declined to share more information on them.
Boris Barbour with the PubPeer Foundation, which maintains the PubPeer “post-publication peer review” website, wrote in an email that there “are now quite systematic plagiarism checks at the better journals, so in good journals there are fewer problems with new papers.” Dion, one of the co–lead editors of the American Political Science Review, which uses iThenticate, said she thinks “our privilege gives us a perch where we’re more likely to see or find these things.”
But not all journals are so privileged. Malički, the Research Integrity and Peer Review editor, said that “with the current plagiarism detection softwares and post-publication peer review,” plagiarism “is often caught.” But, he said, “there are approximately 70,000 scholarly journals out there in the world, and not all can afford to check all papers with” the costly software.
iThenticate is made by Turnitin, whose website offers customized pricing based on an organization’s size and needs; otherwise, a check generally costs $100 per manuscript of fewer than 25,000 words.
Not all assessments of plagiarism-detection software are rosy. Budd, the Mizzou professor emeritus and journal editor, wrote, “Automated systems designed to detect plagiarism are an imperfect solution at best … the system my publisher uses claims instances of plagiarism at times when the material is quoted and cited properly. This is a human challenge and does require a human solution.”
Weber-Wulff, the plagiarism researcher, said “the software is stupid” and “a human being has to go through.”
“Apparently nobody reads the doctoral dissertations,” she said, noting she’s seen some published with Wikipedia links still in them.
And she said human checkers, like herself, are underfunded.
Cheryl Ball is executive director of the Council of Editors of Learned Journals, an association representing about 260 primarily “humanistic” journals. She said she’s also been editing the journal Kairos for nearly 25 years. She estimated that Kairos sees plagiarism in maybe one out of every 100 to 150 submissions, often from non-U.S. scholars who are under even more unrealistic publication pressures. However, she said her journal doesn’t use plagiarism detection software, due to issues it’s found with the technology.
“All of this discussion about plagiarism that’s going on right now is purely politically motivated, and to target academics in this is part of a right-wing agenda to undermine intellectualism and research, basically,” Ball said.
‘Publish or Perish’
The plagiarism problem, to whatever extent it exists, comes down to a fundamental issue, multiple interviewees said: the pressure on academics to keep publishing to keep their jobs and advance in their profession.
“The research-based evaluation and reward system is still largely focussed on the quantity of published research,” wrote IOP Publishing’s Eggleton. “This puts immense pressure on researchers to publish papers in order to secure funding or progress in their careers. Plagiarising other sources is one of many tactics used to achieve this.”
Oransky, the Retraction Watch co-founder, said, “Everything revolves around publishing papers in academia. It’s publish or perish—it’s the oldest slogan in academia, and to me it explains, if not quite all of it, the vast majority of it.”
One reason that plagiarism and other misconduct are not often exposed, he said, is that “every incentive is aligned against correcting the record.” Universities don’t want to threaten the “golden goose” of their funding by exposing professors. Publishers have their business models at stake. Researchers fear for their careers.
“The problem with all this is you’re going to let people who actually aren’t particularly interested in strengthening universities and other institutions end up writing the rules,” Oransky said. “That is exactly what happens when a profession, a set of institutions, fail to police themselves. It’s always what happens.
“The chickens are coming home to roost,” Oransky said, “and the fact that people with motivations that academics don’t like are the ones actually driving the chickens to the roost is completely predictable and could have been avoided.”