John P.A. Ioannidis, M.D., D.Sc., the author of November’s Cerebrum article, “Failure to Replicate: Sound the Alarm,” discusses why most biomedical research papers (including even many of the most influential ones) later turn out to be wrong or exaggerated, and what can be done about it. His PLoS Medicine paper published in 2005 is the most accessed and downloaded paper in the journal’s history (with approximately 1.5 million hits), and his current citation impact (exceeding 20,000 new citations in the scientific literature every year) is among the highest of all scientists. The Q&A below also draws on his responses from talks sponsored by PLoS and Stanford.
What inspired you to study the way scientific studies are conducted?
As a researcher myself, I could see on a daily basis major challenges in my work: how best to design, conduct, report, interpret, synthesize, and disseminate scientific findings. It was very easy for bias and error to creep in. I felt that instead of focusing on narrow questions, a
major issue was how to find ways to reduce biases and errors and make the
process of scientific investigation more efficient.
Were you surprised by the reaction to your first paper ten years ago,
“Why Most Published Research Findings Are False”?
I was humbled that this work drew
so much attention from very different scientific fields, not just biomedicine but also psychological science, social science, and even
astrophysics and other more remote disciplines. Since then I have learned from
many people in biomedicine and beyond about their experiences—both empirical
and theoretical. They have provided insights about research practices and how
we can improve them. So it’s been a learning experience for me.
What were you trying to accomplish in your second updated paper in PLoS Medicine in 2014?
I was trying to move one step forward and communicate how we can make more published research true, meaning how we can probe and identify the best research practices that lead to the most reliable results, and do that in a more efficient manner. There are lots of
ideas circulating in the scientific literature; some of them have been tested
more rigorously than others. There are lots of possibilities where we can see
improvements in the efficiency and credibility of scientific results.
I’m also trying to present different options, such as improving peer review, being transparent about limitations, eliminating conflicts of interest, improving data sharing, using study registration in appropriate settings, and aligning
the different stakeholders who are involved in the scientific process. Beyond scientists, there are other stakeholders: universities, funders, journals and publishers in the industry, and the general public, who are interested in scientific results for different reasons.
The fact that science is so
difficult to conduct is really telling in terms of how important it is. Science
cannot be replaced, despite the denialists of HIV and climate change and those who espouse creationism. Their chances of being correct are practically zero. We may get it wrong initially, but hopefully, by using the scientific method, we have ways to improve on our initial errors and get closer to the truth.
At what stage do most studies get it wrong?
It’s a continuum. There are multiple phases and each phase has its waste
contribution. And there’s waste contribution even from selecting the study to
perform or the question that the study is asking. There are lots of studies
that have no rationale for being done. If someone says, “I’m going to do another
meta-analysis on statins for prevention of atrial fibrillation in cardiac
surgery,” I will point out that we have 20 already, so why do we need a 21st? But I’m sure one will probably come along in the next few months. There’s been a lot
of waste in design, conduct, analysis of the data, reporting of the data, and
in the post-publication interpretation. It’s not just one area.
What about the role of journals?
Journals can be a very influential stakeholder in the process because
they control much of the reward and censor system. So this means that they can
make things far worse or they can make things far better. One example where
they make things worse is where you have journals that are willing to accept
practically anything, with hardly any peer review. There was that hoax that was published in Science: close to 300 journals were sent a fake paper with all the errors that you can imagine. Half of them accepted it, and another 20 percent were still considering it.
Journals can also make things much better. For example, registration of clinical
trials would not have been successful unless all the major clinical journals
had agreed that we’re not going to publish a clinical trial unless it is
registered. People have been talking about registration for ages. In the 1990s,
BMJ (the British Medical Journal) had come up with a nice idea about a trial
amnesty, saying that if you haven’t published your results, we will forgive
you; come out and tell us what you found. Practically nobody came out to tell
us what they found. But when BMJ, JAMA, The Lancet, the New England Journal of Medicine, and all the other specialty journals agreed not to publish unregistered studies, it mattered, and the same can apply to other initiatives, such
as data sharing.
Would reforming how scientists are educated help?
Most scientists in biomedicine and other fields study subject matter topics; they learn about subject matter rather than methods. I think several institutions are slowly recognizing the need to shift back to methods: making scientists better equipped in study design and in understanding biases, so that they grasp the machinery of research rather than just the technical machinery.