Sound the Alarm: Fraud in Neuroscience
We expect scientists to follow a code of honor and conduct and to report their research honestly and accurately, but scientific misconduct, which includes plagiarism, faked data, and altered images, has led to a tenfold increase in the number of retractions over the past decade. Among the reasons for this troubling upsurge is increased competition for journal placement, grant money, and prestigious appointments. The solutions are not easy, but reform and greater vigilance are needed.
Do scientists cheat? If so, what are their motives? Are there just a few occasional offenders, or does the recent spate of scientific misconduct cases represent the tip of an iceberg? How much of a problem is subconscious cheating? How can we police ourselves so that fraud decreases, even as the pressures on scientists grow?
These are questions I ask myself as the chief editor of an international journal, the chair of a neurobiology department, and the principal investigator of a systems neuroscience laboratory.
I’ve been fortunate to sidestep any problems with fraud in my own department and laboratory, but in the world outside my own, I have witnessed several seemingly undeniable cases of fraud. I also have seen a larger number of cases that seemed like fraud but could have been errors, plenty of cases that aroused suspicion, and many examples of simple errors that were not fraud but resembled it. Finally, I have received accusations of misconduct that turned out, upon investigation, to be misunderstandings or pure inventions. Even though cheating seems to be a growing problem in science, we need to be cautious about proclaiming someone guilty of fraud before all the evidence is available.
In my opinion, it is likely that the field of neuroscience is detecting only the tip of the fraud iceberg. Even though most scientists conduct their research impeccably, there is more misconduct than journal editors and the scientific community detect. This is mainly because cheating can be difficult to uncover.
Recent news stories in the scientific and lay press highlight an elevated number of high-profile findings that have proved difficult for others to replicate, numerous retractions of published papers, and a few widely known examples of blatant misconduct. Of course, failure to replicate another scientist’s findings can result from sloppiness or divergent techniques, and retraction can occur because of an honest error. I want to believe that all the results from my field—indeed, from my own laboratory—are genuine, and that suspicious situations come from honest errors. I trust my colleagues, students, postdocs, and research staff implicitly, and I believe that they are genuinely interested in the truth. But how can I or anyone else know for sure?
The Nature of Fraud
Some kinds of fraud cause great concern. Some scientists create data that support their hypotheses, and others adjust data so that the results are statistically significant or are just cleaner and more compelling.
Some scientists appropriate others’ ideas or borrow promising approaches after seeing them in a communication at a meeting, or in a grant or paper they have been asked to review. In science, ideas tend to evolve in parallel, at the same time, in the minds of different groups within a subfield, and it can be difficult to assign ownership. But I know of situations where a senior scientist appears to have stolen the ideas of a young scientist and used the ideas to his or her own advantage. I am sure it happens more often than meets the eye, even though successful senior scientists should have enough ideas of their own and not need to steal them.
I also often see misconduct that we as a field must label as wrong but that probably results from misunderstanding, frustration, and communication failures. We can hope to reduce disagreements about authorship after a paper has been submitted or published through training in the responsible conduct of science and through the growing practice of identifying each author's contribution in a footnote. Proper training also should alert scientists to the prohibitions against reproducing published material without prior permission and against submitting a paper to two journals at the same time. We can minimize these kinds of misconduct by better educating young scientists about the rules of acceptable ethical conduct.
True, plagiarism is fraud. But most plagiarism occurs because an author comes from a culture where copying is a sincere form of flattery, or because an author simply doesn’t understand the importance of rephrasing an idea in one’s own words. As an editor, I take a generous view toward minor examples of plagiarism that I detect during review, and I advise authors to rewrite in their own words. Also, I do not regard as plagiarism the reuse of one’s own words from a previous paper to describe routine methods. Still, I know that others do, and I prefer to see the material rewritten in brief form with a reference to the more authoritative description elsewhere.
Detection of Fraud
One reason for the proliferation of fraud is that it’s easy to hide and difficult to detect. In my role as an editor, instances of potential misconduct generally come to me quite late in the review process. Accusations usually occur in the form of an e-mailed complaint from someone who is closer to the situation and/or more expert than I in the particular field. I estimate that I receive complaints about scientific misconduct for about 2 percent of submissions. I believe that the rate is even higher in some other journals.
Fraud can go unnoticed through much of the research process because so much of science is solitary, and we are frequently alone with our data. Laboratories can operate quite independently of the world around them. At many institutions, critical procedures and analyses are performed in shared research cores that are run by individual departments or by the institution for the common good, rather than by an individual laboratory. Oversight of these shared cores may be reduced, even inadequate. It would be easy to miss sleight of hand from someone who seems to have so-called golden hands. We should always remember the adage, “If it seems too good to be true, then it probably is too good to be true.”
For reasons that I do not quite understand, data that are simply made up can escape the review process and not come to light until a paper appears on a journal’s website and is read by an expert. Why do our reviewers fail to recognize a single image that reappears in multiple figures to show different results, or error bars that are all the same length across an entire graph? Could we create a rigorous process that would detect this kind of occurrence, determine whether it results from an honest error or from malicious misconduct, and correct it?
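Part of such a process could even be automated. As a minimal sketch (the function and figure names here are hypothetical, not an existing screening tool), byte-identical reuse of the same image file across figures can be flagged simply by hashing each file; production screening tools go further and use perceptual hashing, which also survives cropping and recompression:

```python
import hashlib

def find_duplicate_figures(figures):
    """Group figures by the SHA-256 hash of their raw image bytes.

    `figures` maps a figure name to the image's raw bytes. Only
    byte-identical reuse is caught by this sketch; perceptual hashing
    would be needed to catch cropped or recompressed duplicates.
    """
    seen = {}
    for name, data in sorted(figures.items()):
        digest = hashlib.sha256(data).hexdigest()
        seen.setdefault(digest, []).append(name)
    # Keep only hashes shared by more than one figure
    return {h: names for h, names in seen.items() if len(names) > 1}
```

A journal could run such a check on every submission before review, exactly as plagiarism-detection software is already run on the text.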
Fastidiously kept notebooks and experimental logs should establish data provenance and help to prevent data fraud, but cheaters will always find ways to cheat in spite of structures designed to prevent cheating. I realize it would be a big change of culture, but data fraud would be reduced—and the quality of the entire scientific literature improved—if we established requirements for publishing not just the paper but also the underlying data. Modern technology would allow authors to catalog and store data in a way that tracks all changes to the record, and to link the data to the figures derived from them. We only have to decide that this is the direction we need to go.
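One minimal version of such change-tracking (all names here are illustrative, not an existing system) is to record a cryptographic fingerprint of the dataset behind each published figure; any later alteration of the data then becomes detectable, because the fingerprint no longer matches:

```python
import hashlib
import json

def fingerprint_dataset(rows):
    """Return a SHA-256 fingerprint of a dataset (a list of records).

    Canonical JSON serialization (sorted keys, fixed separators) makes
    the hash stable across runs; any change to the data changes it.
    """
    canonical = json.dumps(rows, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def register_figure(manifest, figure_id, rows):
    """Record which data a figure was derived from (hypothetical manifest)."""
    manifest[figure_id] = fingerprint_dataset(rows)
    return manifest

def verify_figure(manifest, figure_id, rows):
    """True if the data behind a figure are unchanged since registration."""
    return manifest.get(figure_id) == fingerprint_dataset(rows)
```

A journal or institutional repository could store the manifest alongside the paper, so that readers and reviewers can confirm that the archived data still match what the figures were built from.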
I also wonder why authorship complaints frequently arise very late in the publication process. How can a principal investigator shepherd a paper all the way to publication without anyone along the way noticing that the paper is based on the work of a former Ph.D. student who is not listed as a co-author? In contrast, duplicate submissions often come to light very quickly. When authors submit a paper to two journals at the same time, scientists solicited to review the paper are very quick to point out that they were asked to review a seemingly identical paper for another journal.
It is probably true that the more serious the fraud, the harder it is to detect. As a result, we should be careful not to punish the minor offenses too harshly, or pursue them too vigorously, when the really serious ones may be going unpunished.
My wish is that cases of potential fraud or scientific misconduct in my department or laboratory come to my attention quite early, presumably in the form of a report from an innocent bystander who notices something that doesn’t seem right. To aid this process, and to reduce fears of retaliation, I have placed an anonymous whistle-blower form on my department’s website. It is part of my job as department chair and principal investigator to understand these situations, to confront them, and to resolve them. I imagine that other chairs see their jobs the same way.
Incentives for Fraud
Scientists cheat because fraud can be rewarding. Generally speaking, scientists understand that the goal of research is to learn about the truths of the world, and most scientists would not cheat simply to achieve greater rewards. But the evidence indicates that some scientists do.
I think that fraud has increased since I came into scientific research 40 years ago, as the challenges of running a successful research laboratory, obtaining funding, and publishing papers likewise have increased. In the not-so-recent past, we did not have the cutthroat competition to publish in the most prestigious journals that we have today, and grant funding flowed freely. There was enough reward to go around. The life of a scientist was relatively simple, so there were fewer incentives to cheat. While we cannot “rerun the tape” (credit to the late Stephen Jay Gould), I suspect that my own career path would be a hundred times more competitive and stressful now than it was back then.
So what can we do to return to how things once were? The rewards of science arise from publications, but simply publishing does not guarantee success. We are increasingly judged according to where we publish rather than what we publish. Remarkably, we are ranked in proportion to the number of citations garnered by the other papers in the journals that contain our papers (the impact factor). Some organizations decide promotions and grant applications on the basis of the impact factors of the journals that publish a scientist’s papers; rewards come from being published in the journals with the highest impact factors. As a result, the perception of the need for this kind of reward runs strong and deep.
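The arithmetic behind this ranking is worth spelling out, because it is so crude. A sketch of the standard two-year impact-factor definition (the variable names are mine):

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Journal impact factor for year Y under the standard two-year
    definition: citations received in Y to material the journal
    published in Y-1 and Y-2, divided by the number of citable items
    it published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# A journal whose 2022-2023 output of 400 citable items drew 2,000
# citations in 2024 has a 2024 impact factor of 5.0.
```

Note that nothing in this number describes any individual paper, let alone any individual scientist; it is an average over the journal's whole recent output.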
A journal acquires a high impact factor by following a very selective review process. Because it publishes only a small fraction of the papers it receives, publication becomes highly competitive. Yes, the papers in that journal truly are better, on average, than are papers in other journals. But the field elevates the perception of the papers in the journal to such an extent that publication in that journal becomes a very high reward. Some people will stop at nothing—including cheating—to produce a paper that is exciting enough and seemingly reliable enough for such an elevated publication status.
The situation becomes more dangerous when young people start to believe—as they often do—that they can expect to get a postdoc position or a job only after they publish at least a couple of papers in the highest-ranking journals. Sadly, this perception has a kernel of truth to it. If an institution advertises for a faculty job and receives 350 applications, then it is only natural to screen for people whose work has been published in the very top journals. That kind of search might overlook someone of depth, creativity, and substance who values discoveries themselves more than the world’s perception of the discoveries.
I do not think that the high-profile journals or their editors are at fault. They are doing their jobs.
We, the scientists, are at fault. We need to change how we evaluate our colleagues. We need to read their papers from cover to cover and teach our students to do so as well. We need to reject the idea that riffling through a table of contents and reading abstracts constitutes “keeping up with the literature.” We need to judge scientists and their papers by how much they have truly pushed their field forward rather than by where the papers appear. We need to celebrate the excellent science that appears in specialist journals as much as we do the papers in high-profile publications. In recruiting, we need to look at the substance of an applicant’s publications rather than measuring the reputations of the journals that published the papers.
And what about grant money? Doesn’t the pursuit of research funding provide potential rewards for cheating? It absolutely does. A recent high-profile misconduct case allegedly involved falsification of preliminary data for a grant application. I doubt this is the first (or the last) time such misconduct has occurred. It would be glib and simple to say that the incentive of potential grant money could be reduced (but not removed) if pay lines were much better at the National Institutes of Health.
An Insidious Form of Fraud
As the chief editor of a journal, I find it especially challenging to evaluate one type of subtle cheating. When a reviewer is evaluating a paper for publication, there is an opportunity for conscious or, worse, subconscious bias in the review. If the paper has been submitted to a high-profile journal, the reviewer might be thinking, “This paper could take my paper’s place in this journal. I need to find reasons to make sure that doesn’t happen.” It is possible to delay a competitor simply by taking a long time to complete and submit a review. It is easy for reviewers to delay publication or to push a paper down in the journal food chain by asking for more experiments. The advent of “supplementary material” has made it easier for reviewers to ask for more data. Supplementary material is not part of the actual paper but consists of additional figures, graphics, and tables that are maintained on a journal’s website and are available only on the internet. I advocate limiting supplementary material to formats such as audio and video clips that cannot easily be embedded in a PDF, thus obviating this particular device now available to reviewers. Supplementary figures and text have been disallowed at least in Neuroscience and the Journal of Neuroscience, and limited in some other journals. People have complained, of course, but the quality of the papers published by these journals has not suffered.
As an editor, I always wonder whether a reviewer’s request for more experiments is asking for a fundamental piece of the story, or requiring the authors to go way beyond any reasonable scope for the paper they have presented. As the chair of a study section in the past, I frequently had to remind my reviewers that they were to evaluate the grant they had been given, not try to rewrite it. As editors and reviewers, I think we need to try to evaluate the paper we’ve been given, not try to transform it. I do not know whether this problem will be resolved by new journals that promise an efficient review process and do not make requests for additional experiments that expand the scope of the paper, but it is an experiment worth trying.
In regard to subconscious bias, I suggest that there may be a ton of subconscious cheating going on in how experiments are done, how data are selected and analyzed, what is and is not told to the Principal Investigator of the lab, and how the message of a paper is massaged. My colleagues who are psychiatrists assure me that the subconscious exists and that it plays a key role in our actions. Subconscious misconduct is one of the most serious issues we face, and I suspect there is much more of it than there is of conscious, intentional cheating.
What Can We Do?
There are parallels between the situation with scientific fraud today and the situation with animal use (and abuse) 30 years ago: clear examples of inappropriate treatment of animals then and of scientific fraud now; an unknown number of undetected abuses then and the likelihood of more than a few undetected cases of fraud now; many instances of substandard treatment of animals then and of possibly inappropriate scientific conduct now; and an overwhelming majority of scientists who were using animal subjects appropriately then and who are conducting science impeccably now. Unfortunately, data fraud may be more difficult to spot than is animal abuse.
To reduce scientific fraud, perhaps we can learn some lessons from the scientific community’s responses to the actions of animal activist organizations in the 1980s and 1990s. Those organizations, through actions of questionable appropriateness and (sometimes) legality, mobilized the scientific community to police itself. As a result, we now have detailed protocols that outline procedures to ensure the welfare of our animal subjects. We follow the protocols, and there is a structure for regulatory oversight that has largely (but not completely) eliminated abuses. Adherence to regulations, laws, and protocol is a key part of the ethics training of young scientists. We have anonymous whistle-blower avenues for anyone who has a concern. And the issue is discussed openly at venues as large as the annual conferences of our major professional societies and as small as individual lab meetings. While the process has been unpleasant at times, our treatment of animal subjects has improved dramatically.
In the realm of scientific fraud, there are now watchdogs such as the blog Retraction Watch and the pseudonymous whistle-blower Clare Francis, as well as software that helps detect plagiarism. They have raised awareness of the issue for some of us. Thus, the first step is to broaden awareness of the issues of fraud so that the discussion occurs in all institutions and all laboratories, not just in those that have been touched by an incident of serious scientific misconduct.
Another strategy is to publicize cases of fraud ourselves rather than leaving it to outside activists. Perhaps we need a stronger and more visible regulatory structure to detect fraud earlier in the steps required to complete and publish a research project.
Journals and institutions alike are stakeholders. Both stand to lose if fraud continues, and both should play proactive roles in detecting and thereby removing the incentives for fraud. We, as scientists, should reduce our admiration of “high-profile” publications and evaluate scientists for jobs, promotions, grants, and tenure on the basis of what they have done rather than where their research has been published.
Finally, we should talk about misconduct more often and more deeply. Subconscious and conscious misconduct needs to be discussed in lab meetings, faculty meetings, ethics courses, and national meetings. By putting fraud under the light and developing a strong structure for its detection, we can reduce it dramatically, even if we will never be able to eliminate it altogether. And we need to remember that although fraud may be more prevalent than we think, most scientists conduct their research irreproachably. As always, we need to be careful not to assume fraud has occurred just because there’s been an accusation. Investigation often reveals that an error, a misunderstanding, or nothing at all has occurred.