I have been ruminating lately on the idea of publication bias and
some of its negative consequences.
For those of you who are not familiar with the term,
publication bias refers to the fact that studies finding statistically
significant results are published more frequently than studies that do not.
A term was coined back in 1979 for this: the “file drawer
effect”, describing the many studies that are conducted but never
reported because their results were not significant (1). These studies languish in
the proverbial file drawer, never seeing the light of day, unpublishable within
the current model.
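As a rough illustration of why this matters, here is a minimal simulation sketch in Python (the true effect size, sample sizes, number of studies, and 0.05 threshold are all arbitrary assumptions chosen purely for illustration) showing how reporting only the significant studies exaggerates the apparent size of an effect:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    true_effect = 0.2      # assumed small true difference between drug X and drug Y
    n_per_group = 30       # assumed sample size per arm
    n_studies = 10_000     # hypothetical independent studies

    all_effects = []
    published_effects = []

    for _ in range(n_studies):
        x = rng.normal(0.0, 1.0, n_per_group)          # drug X arm
        y = rng.normal(true_effect, 1.0, n_per_group)  # drug Y arm
        t, p = stats.ttest_ind(y, x)
        observed = y.mean() - x.mean()
        all_effects.append(observed)
        if p < 0.05:                                    # only "significant" results reach the journal
            published_effects.append(observed)

    print(f"True effect:                 {true_effect:.2f}")
    print(f"Mean effect, all studies:    {np.mean(all_effects):.2f}")
    print(f"Mean effect, published only: {np.mean(published_effects):.2f}")

With a small true effect and modest sample sizes, the studies that happen to clear the significance bar are mostly the ones that overestimated the effect, so the “published” average can end up several times larger than the true value.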
If researcher A goes through the effort of conducting a
study evaluating the differences between drugs X and Y, and the null
hypothesis is not rejected (i.e. no difference is found between drug X and drug Y),
then why spend all of the time and money writing up
a manuscript only to have it rejected? In the modern world of academic science, the
age-old mantra of “publish or perish” still holds true. For their career to
progress, a researcher needs to produce a steady supply of manuscripts in a timely manner.
In my opinion, it is not that people do not want to publish manuscripts with
null results; it is that there is very limited real estate in journals.
Positive results (i.e. significant results) are, flat out,
sexy. I mean, what is more appealing, reading that drug A does not cure
“incurable disease” or that drug B cures “incurable disease”? Obviously, drug B
makes a better story.
If journals’ only purpose were to keep us abreast of the most exciting
current developments in a given field, then this situation would be fine; that
is the function of tabloids. But that is not journals’ sole purpose. They are
supposed to act as the repository of emerging scientific knowledge, sexy or
dull.
Anyone who has tried to learn anything new knows that we often
learn the most from our mistakes. There is value not just in reading about
proven solutions, but also in being aware of the ideas that did not
work.
If you take this idea to its logical conclusion, you can
imagine a situation in which several hypothetical researchers are working on
the same problem. Let’s pretend their intention is to find a cure for breast
cancer (since there really are many separate scientists working on this right
now). The different labs are all working as hard as they can, trying different
solutions independently. Yet, many of their failed attempts at cures will not
make it into the journals for others to see. This means that others out there are
making the same mistakes, completely unaware of the failed attempts of their
peers.
This is very inefficient, costly, and dangerous.
A better solution would be an open science approach, in which
all attempts at science were published in a way that allows loose collaboration
between labs. One lab could learn from the failed experiments of another, expediting
the process and introducing a level of efficiency that doesn’t currently exist.
In this model, there would be no more reinventing of the
wheel; reading that somebody in Kansas, Russia, or Nigeria has already tried,
and failed with, the idea you were about to pursue would allow you to move your
limited resources on to the next step.