
Managing Conflicting Interests in Medical Journal Publishing
Editors of scientific journals are the gatekeepers for much of what we know about science and medicine; what appears in their pages often turns up later in drug advertising, advocacy group fundraising, patients’ Internet search results and science news in the daily media. As editors, they must ensure the accuracy and novelty of the information they disseminate. Cerebrum invited the editors of a top scientific journal, Annals of Neurology, to reveal their perspective on the difficulties of dealing with the conflicting interests of authors, the pharmaceutical industry, and journals themselves, all while maintaining the quality of the information that reaches the public.
“Where do we obtain our facts as well as our theories? Both are being published daily in the medical journals we read. . . . Who decides what we read? The editors.”1
—Jan P. Vandenbroucke, writing in the Lancet, 1998
Ever since the Journal des Savants and the Philosophical Transactions of the Royal Society of London, the first academic journals, appeared in 1665, scholarly journals have been an essential engine driving scientific inquiry, investigators’ careers, industry profits and the agendas of nonprofit organizations. Most scientists work in highly competitive, hierarchical environments in which creative output is the currency of trade and an individual’s reputation and rank are determined by productivity as measured by the number of manuscripts one publishes. The placement of work in top-ranking journals has become increasingly essential to enhance the visibility of scientists’ discoveries, promote acceptance by colleagues, attract trainees to their laboratories, secure grant funding, and ensure job security. As editors of a clinical neurosciences journal, Annals of Neurology, we find that the urgency for investigators to publish in top journals places great pressure on journal editors, who not only serve as arbiters of quality and taste but must also successfully align the different interests of investigators, peer reviewers, and the journal itself with the interests of the readership at large.
Determining Novelty
One challenge we face as editors is to ensure that submitted articles report new results. This involves policing duplicate publications and determining exactly what constitutes duplication. It is by no means an easy task; the National Library of Medicine’s Medical Literature Analysis and Retrieval System Online (MEDLINE) database currently indexes more than 18 million records published in 5,246 medical journals. In 1969, Franz Joseph Ingelfinger (1910–1980), editor in chief of the New England Journal of Medicine, established what became known as the “Ingelfinger rule,” essentially stating that the journal would not consider a manuscript for publication if it had previously been published elsewhere.2 Although some have claimed that the rule has had a stifling effect on the dissemination of scientific research,3,4 it is now standard policy for nearly every medical journal, across all specialties. The distinction between the justifiable practice of splitting data from a major study for publication in multiple journals catering to different specialties and a clear violation of the Ingelfinger rule is often ambiguous, as it depends largely on the degree of data overlap.
As one Annals of Neurology reviewer mentioned to us recently, some of our more prolific colleagues do seem to be disseminating their work in “the smallest publishable units.” Such instances of apparent duplication are relatively benign compared to outright republication. In fact, considering that each additional publication may add value in today’s difficult funding climate, it is perhaps surprising that splitting data is not a more common practice. Other examples of potentially “benign” duplication include review articles based upon a scientist’s body of work, interim clinical trial reports, and abstracts. Sadly, we believe that some contributors have exceeded any reasonable threshold of benign duplication, crossing the boundary into auto-plagiarism by copying entire paragraphs from prior reports. In our experience, most violations involve the submission of similar review articles to different journals. Despite the proliferation of electronic tools at our disposal to discover and organize content in the medical literature, it remains a challenge for journal editors to detect when the Ingelfinger rule has been broken.
The PubMed search engine remains one of the best tools for detecting potential duplicate publications. In addition to peer review, which we believe is a very good means of identifying novelty and importance, we have adopted a policy at the Annals that no manuscript is accepted without first conducting a MEDLINE search for prior publications or pre-existing knowledge that would belie the novelty of the paper under review. We also use the Google Scholar search engine extensively to screen for prior use of key phrases or paragraphs in a submission under review. On more than one occasion we have faced the unpleasant task of notifying an author that our search revealed that identical content had already been published elsewhere.
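For readers curious about how such a screen might be automated, the sketch below queries MEDLINE through NCBI's public E-utilities using the Biopython Entrez wrapper. The search terms, e-mail address, and result handling are illustrative placeholders, not part of the Annals workflow.

```python
# A minimal sketch of an automated MEDLINE novelty screen via NCBI E-utilities
# (requires the Biopython package and network access; terms are placeholders).
from Bio import Entrez

Entrez.email = "editor@example.org"  # NCBI asks callers to supply a contact address

# Search PubMed for records that overlap with the manuscript's key concepts.
handle = Entrez.esearch(
    db="pubmed",
    term='"RNA interference" AND "neurodegenerative disease" AND therapy',
    retmax=20,
)
record = Entrez.read(handle)
pmids = record["IdList"]
print(f"{record['Count']} candidate records; inspecting the first {len(pmids)}")

# Pull the matching citations in MEDLINE format for side-by-side comparison.
if pmids:
    handle = Entrez.efetch(db="pubmed", id=",".join(pmids),
                           rettype="medline", retmode="text")
    print(handle.read())
```

Any hits returned by such a search still require human judgment; overlap in keywords is only a prompt to read the earlier papers, not proof of duplication.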
The Computational Biology Group at the University of Texas Southwestern Medical Center at Dallas has developed another tool, called eTBLAST, specifically designed to identify similar articles in the biomedical literature by combing through abstracts of published articles. A user enters a string of text, and the program searches for similarities in the published literature. A recent study published in Nature using the eTBLAST technology estimated that of the approximately 18 million citations in the MEDLINE database, nearly 200,000 are duplications—most of them instances of auto-plagiarism.5
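eTBLAST itself is a specialized service, but the underlying idea of ranking a query abstract against a corpus of published abstracts by textual similarity can be illustrated with a simple stand-in. The sketch below uses TF-IDF cosine similarity with scikit-learn and invented placeholder abstracts; it is not eTBLAST's actual algorithm.

```python
# Toy illustration of abstract-similarity screening (not the eTBLAST method).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A small corpus of previously published abstracts (invented placeholders).
published_abstracts = [
    "RNA interference reduced mutant protein levels in a mouse model of ataxia.",
    "Varicella zoster DNA was detected in cerebrospinal fluid during relapse.",
    "A new transgenic model reproduces key features of the disease.",
]

query_abstract = "RNA interference lowered mutant protein in a murine ataxia model."

# Vectorize corpus plus query, then rank published abstracts by similarity to the query.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(published_abstracts + [query_abstract])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for score, abstract in sorted(zip(scores, published_abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```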
Unfortunately, neither PubMed nor eTBLAST is capable of detecting auto-plagiarized work that has not yet been published. Remarkably, even in 2008, journal editors remain critically dependent on the honesty and integrity of authors and peer reviewers to protect against plagiarism and other forms of misconduct.
Many investigators also feel pressure to present findings to scientific audiences before their work is published in a peer-reviewed journal, both to scoop competitors and to obtain greater acknowledgment of their work. Drug or device manufacturers are also interested in disseminating positive trial results as quickly as possible, and publicly traded companies have a legal responsibility to immediately notify investors of any news that might materially affect the value of their holdings. Authors often ask us whether presenting data at a scientific meeting constitutes duplication in the literature and precludes publication in the Annals. If not, to what degree does public discussion affect a paper’s novelty? Such questions are never easy to answer.
The question of whether it is acceptable to discuss data in a public forum before results are published in a peer-reviewed journal has generated a good deal of confusion. For the record, our position is that scientists are allowed to discuss their data before it is published, and in many instances they have a moral responsibility to do so. Rapid presentation of clinical trial results is clearly in the best interests of patients. From an editor’s perspective, the impact of a paper may be diminished if the findings are already widely known; however, a paper is considered novel as long as the findings have not been previously published in the primary peer-reviewed literature.
Limitations of Peer Review
Peer review is meant to protect readers and editors from the biases of the investigators submitting their manuscripts, but peer review also introduces problems of its own, including potential conflicts of interest. One of the more common complaints that we hear around the Annals editorial office is that peer review is too slow. Authors often grumble about the possibility that a competitor, serving as an opportunistic peer reviewer, might steal their data, purposely delaying the review while rapidly preparing and submitting similar data of his or her own to another journal. One author explicitly requested that we not send a submitted paper to “any expert in the field,” for fear that the referee might steal the data. Although the vast majority of such complaints can be explained by anxiety and eagerness to publish, it is not entirely uncommon for a peer reviewer to take weeks and sometimes months to complete the assigned review. In such instances, we sometimes wonder whether our most concerned authors’ suspicions might be justified after all. No system can be perfectly fair, and the process of peer review is no exception.
The biases of peer reviewers are largely unpredictable and can occur in either direction. A reviewer can stymie a competitor by delaying review, embrace a validation of his or her own work by providing a laudatory evaluation with scant criticism, or suppress dissent by harshly critiquing a paper that presents evidence contradictory to that of the reviewer’s work—no matter how convincing or rigorous the data.
Last year, for example, the Annals received a paper that described a successful animal study of RNA interference as a potential therapy for a neurodegenerative disease. The paper was sent out for peer review, and shortly thereafter one of the reviewers contacted the editors to point out that she had presented data at a scientific conference that were suspiciously similar to those in the submitted paper, and that her data were currently in press with another journal. Her data and those in the submitted paper differed in only one detail. After pointing out this remarkable coincidence, the reviewer took 35 days to review the paper; she protracted her review intentionally to ensure that her own paper went to press first. To her credit, she told us openly that she was doing so.
Regrettably, there is little that editors can do in such instances without compromising the anonymity of the reviewer or accusing the author of appropriating an idea without much credible evidence to support the accusation. When the data are solid and the paper is favorably reviewed, the editors are obligated to publish.
In another example, the Annals published a paper describing a new animal model for a different neurodegenerative disease. The reviewers found that the study was well executed, but within weeks of the article’s publication, we received information from other scientists that the work might be flawed. Several months later, we published a series of papers from authors at various institutions around the globe, most reporting a failure to replicate the original work. Although this process was painful, it highlighted for us a responsibility that we believe all journal editors must assume: as a general rule, we will publish any credible report that calls into question content previously published in our journal.
A final example: We recently published a report linking a common virus, varicella-zoster (the virus that causes chicken pox and shingles), to multiple sclerosis. In doing so, we were aware that similar research in the past had failed to identify this virus in the disease. In the new report, however, modern molecular methods were used to search for the virus, and the data as presented were so clear-cut and definitive that the findings could not be dismissed. Because we were highly skeptical, given the many earlier reports of virus isolations in multiple sclerosis that were not reproducible, we insisted on a blind replication using coded samples sent from another source. The authors performed this replication and confirmed their initial finding. Although we continued to suspect that the findings were unlikely to be true, we decided to publish, with an accompanying cautious and somewhat skeptical editorial. Not surprisingly, we are currently reviewing a paper from another lab that failed to replicate the finding.
Carl Sagan’s famous axiom that extraordinary claims require extraordinary proof would seem to be a useful rule for all journal editors. In a larger sense, J. P. Ioannidis and others have raised the concern that many claims in the biomedical literature are not reproducible.6 Human studies involving large-scale genetic, proteomic, or immunologic investigations carry a particular risk of type 1 errors (statistical errors better known as “false positives”). For this reason, we have adopted a firm editorial policy requiring confirmation of such studies with an independent data set.
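To see why such confirmation matters, consider a hypothetical screen of 20,000 markers in which no true association exists: at a significance threshold of 0.05, roughly 1,000 markers will appear “significant” by chance alone, and an independent data set prunes almost all of them away. The simulation below uses invented numbers purely for illustration.

```python
# Illustrative simulation of false positives in a large screen (hypothetical figures).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_markers, n_per_group = 20_000, 50

# Both groups drawn from the same distribution: no marker is truly associated.
cases = rng.normal(size=(n_markers, n_per_group))
controls = rng.normal(size=(n_markers, n_per_group))
p_discovery = stats.ttest_ind(cases, controls, axis=1).pvalue

hits = p_discovery < 0.05
print(f"'Significant' markers in the discovery set: {hits.sum()}")  # about 1,000 by chance

# An independent replication cohort weeds out almost all of the spurious hits.
cases2 = rng.normal(size=(n_markers, n_per_group))
controls2 = rng.normal(size=(n_markers, n_per_group))
p_replication = stats.ttest_ind(cases2, controls2, axis=1).pvalue
replicated = hits & (p_replication < 0.05)
print(f"Markers that also replicate: {replicated.sum()}")  # about 50, i.e. 5% of the hits
```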
Despite our best efforts to guarantee scientific rigor and reproducibility, to temper over-interpretation of data, to tone down overstated conclusions, and to sober overzealous authors, certain controversies are simply unavoidable, and the possibility of bias will always exist. The pressure to publish can be great indeed, and when the journal’s considered disposition on a paper is unfavorable, authors work hard to convince us otherwise. In the final analysis, we are responsible for setting the bar at an appropriate level.
Embargo and the Lay Media
Embargoes are another frequent source of ambiguity and conflict. Many of our authors’ institutions and funding bodies are eager to issue press releases when an article is accepted for publication; however, the policy of most journals is that the press release is subject to an embargo so that the primary peer-reviewed literature publishes first. Part of the reason for the embargo is to protect the journal’s revenue; once the essential findings are available in the public domain, the original literature has less monetary value. More important, publishing in the primary literature first ensures that the full data are available to the public before the lay media begin discussing the results of a scientific study.
Once an embargo is lifted, media coverage often falls prey to inaccuracy and errors of omission. A 1991 New England Journal of Medicine study of lay media coverage of three medications used to prevent major diseases revealed that the media had inconsistently and inaccurately reported the benefits and hazards associated with the medications.7 In 2003, the Canadian Medical Association Journal published a similar analysis of 193 newspapers covering five new drugs, revealing that 100 percent of the articles reported at least one therapeutic benefit, but 68 percent failed to report any harm.8 Both studies found that financial conflicts of interest between the pharmaceutical industry and key expert sources cited in the reports were almost never disclosed.
Financial Conflicts of Interest
Disclosure of financial conflicts of interest is an absolute requirement in medical journals in order to diminish the potential role of industry incentives. Everyone in the academic community is aware of numerous instances of high-profile investigators who failed to disclose significant conflicts of interest, leading to personal and professional embarrassment and, for some, representing serious violations of the law. Most of these lapses in judgment concern failures to fully disclose financial relationships with pharmaceutical or device manufacturers, or with financial firms.
Ghostwriting, writing by someone who is not listed as an author, is one common example of an unhealthy relationship with a commercial sponsor. At the Annals, we continue to receive manuscripts that appear to have been ghostwritten as part of a marketing agenda, perhaps by a drug-company employee, often with the purpose of encouraging physicians to prescribe drugs or devices for uses that are not included in the product’s labeling. This “off-label” prescribing is legal, but pharmaceutical companies are not legally allowed to promote off-label use through advertising.
Whenever we suspect that a manuscript reflects in any way the hidden agenda of a commercial entity, we reject it—even when the authors (usually distinguished senior investigators at more than one institution) lobby for a reversal of the verdict. Finally, it is important to recognize that relationships between industry and investigators are not always financial ones, and that non-financial entanglements—perhaps involving access to an emerging technology or leadership in a clinical trial—may create very powerful relationships that can lead to bias and often are not formally disclosed.9
As editors, our most inviolable charge is to ensure that the content of our journal is of the highest possible quality, that we always operate in the public’s best interest, and that we never give readers cause to question our independence. Conflicts of interest can appear in many forms, both obvious and subtle. Only by recognizing that conflicts of interest will always exist, that human beings are fallible, and that scientific inquiry is never free from potential bias can we make certain that such conflicts are recognized and managed in a consistent manner and under the full light of disclosure.
A generation ago, the science historian Derek de Solla Price wrote that 80 to 90 percent of all scientists who have ever lived are currently alive,10 a situation that likely remains true today. Modern scientific journals occupy a position of sacred trust in this crowded world as they endeavor to assess new discoveries and disseminate novel findings to the community at large. Like scientists, journals are extremely competitive with one another, each vying for supremacy in its respective area of inquiry. For journal editors, juggling the competing demands, deadlines, and pressures of modern scientific publishing while remaining free from bias, fair to authors, attentive to quality, and true to the highest ethical standards is a daunting day-to-day challenge.
References
1. J. P. Vandenbroucke, “Medical Journals and the Shaping of Medical Knowledge,” Lancet 352 (1998): 2001–2006.
2. J. Toy, “The Ingelfinger Rule: Franz Ingelfinger at the NEJM 1967–77,” Science Editor 25 (2002): 195–198.
3. L. K. Altman, “The Ingelfinger Rule, Embargoes, and Journal Peer Review—Part 1,” Lancet 347 (1996): 1382–1386.
4. L. K. Altman, “The Ingelfinger Rule, Embargoes, and Journal Peer Review—Part 2,” Lancet 347 (1996): 1459–1463.
5. M. Errami and H. Garner, “A Tale of Two Citations,” Nature 451 (2008): 397–399.
6. J. P. Ioannidis, “Why Most Published Research Findings Are False,” PLoS Medicine 2 (2005): e124, doi:10.1371/journal.pmed.0020124.
7. D. P. Phillips, E. J. Kanter, B. Bednarczyk et al., “Importance of the Lay Press in the Transmission of Medical Knowledge to the Scientific Community,” New England Journal of Medicine 325 (1991): 1180–1183.
8. A. Cassels, M. A. Hughes, C. Cole, B. Mintzes, J. Lexchin, and J. P. McCormack, “Drugs in the News: An Analysis of Canadian Newspaper Coverage of New Prescription Drugs,” Canadian Medical Association Journal 168 (2003): 1133–1137.
9. S. L. Hauser and S. C. Johnston, “Of Ghosts and Sirens: The Subtlest Lures of Industry,” Annals of Neurology 61 (2007): A11–A12.
10. D. J. de Solla Price, Little Science, Big Science (New York: Columbia University Press, 1963).