In praise of "negative" results
It is unfortunate that scientists are more likely to publish "positive" results than "negative" ones. By "positive", I mean a result that demonstrates or supports a new hypothesis; by "negative", I mean a result that shows a hypothesis is wrong or incomplete. This tendency stems partly from human vanity (we do not become known by publishing only "negative" results) and partly from career pressure (nobody hires a researcher who never shows a "positive" result).
Why do I insist on the importance of "negative" results? There are at least two reasons. The first is practical: because "negative" results go unpublished yet occur overwhelmingly more often than "positive" ones, researchers are likely reproducing the same "negative" results over and over.
The second reason is more fundamental and was advocated by the famous modern statistician R. A. Fisher (Gigerenzer et al. 1997). Scientists always deal with noise in their data, so a single study can rarely demonstrate with great certainty that a hypothesis is true. Publishing more "negative" results would let the whole scientific community get closer to the truth by updating, in one direction or the other, its belief that the hypothesis is true. This attitude is very much in line with a Bayesian approach, in which you update the probability that the hypothesis is true in the face of new results. A "positive" result would increase this probability, a "negative" one would decrease it, and the process would continue until we reach great certainty that the hypothesis is either true or false (in which case it may need to be modified).
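To make the Bayesian picture concrete, here is a minimal sketch of how a community's belief in a hypothesis could be updated study by study via Bayes' rule. All the numbers (the prior, and how likely each result is under the hypothesis versus its negation) are hypothetical, chosen purely for illustration.

```python
def update(prior, p_result_if_true, p_result_if_false):
    """Bayes' rule: posterior probability that the hypothesis is true
    after observing one study's result."""
    numerator = p_result_if_true * prior
    return numerator / (numerator + p_result_if_false * (1.0 - prior))

# Hypothetical starting point: 50% belief in the hypothesis.
belief = 0.5

# A "positive" result, more likely if the hypothesis is true,
# raises the belief.
belief = update(belief, 0.8, 0.3)

# A "negative" result, more likely if the hypothesis is false,
# lowers it again.
belief = update(belief, 0.2, 0.7)
```

Each published result, positive or negative, moves the probability a little; only by seeing both kinds can the community converge toward certainty either way.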
I hope the scientific community will one day follow this approach.
Reference
Gigerenzer, G., et al. (1997). The Empire of Chance: How Probability Changed Science and Everyday Life. Cambridge University Press.