The scientific world today brims with hundreds of research papers and articles published every day. A multitude of new discoveries brings joy to researchers around the world, along with a creeping competitiveness that itches them to tweak published experiments in search of a completely different set of startling results. That could well be the road to a successful publication in a respected journal, ultimately pushing up their rank among their peers in the community. Broadly, this is the motivation of research work today, apart from the obvious joy of deciphering the unknown and the search for answers our fundamentally curious brains so desire.
But things have started to go astray. These former motivations have begun to completely overshadow the latter, intrinsically important ones, with clinical and laboratory researchers focusing more on the speed of results than on their accuracy. A recent report by The Guardian highlights how these practices spread a culture of ‘Bad Science’. The basic problem here is what one can call the Replication Crisis.
An important quality of any crucial experiment is its reproducibility: the ability of independent researchers, in a lab anywhere else in the world, to repeat it using the same or similar equipment. Reproducibility establishes the credibility of an experiment, which can then serve as a base for further experiments. This is an essential part of what is called ‘The Scientific Method’.
However, over the last decade, concerns about irreproducible work have repeatedly cropped up, casting doubt on the unfortunate experiments in the crosshairs of such claims. This is what the term above refers to. A formal definition can be quoted as: “The Replication Crisis refers to a methodological crisis in science in which scientists have found that the results of many scientific experiments are difficult or impossible to replicate on subsequent investigation by either independent researchers or the original researchers themselves.”
Now, replication problems exist in almost all fields of science, but this crisis stands out particularly in the case of Psychology and Medicine.
About 20 years ago, two psychologists, Roy Baumeister and Dianne Tice, a married couple, conducted the “chocolate chip cookie experiment” that went on to be cited over 3,000 times. The experiment provided two groups of participants with chocolate chip cookies and red radishes respectively. Participants were told to eat their assigned sample and then solve a puzzle. The study found that the cookie group kept working on the puzzle longer than the radish group, essentially revealing differences in their willpower under the two conditions. This led to the coining of the term “Ego Depletion”, the idea that our limited reserve of willpower decreases with overuse.
In the following years, another researcher, Martin Hagger, set out to test the theory and performed a series of experiments that seemed to confirm the claim, along with revealing more triggers for the phenomenon. But then came the downturn. A paper published this year disproved the theory, backed by an experimental setup with over 2,000 subjects across two dozen labs.
Another study reported that of 100 attempted replications of psychology experiments, only about 40% succeeded. Further experiments by concerned researchers brought out ambiguities, and sometimes outright fallacies, in the theory. Baumeister himself blamed the automation of experiments for the failures: “In the olden days there was a craft to running an experiment. You worked with people, and got them into the right psychological state and then measured the consequences. There’s a wish now to have everything be automated so it can be done quickly and easily online.” While this statement might not fully explain the results, there is a hint of truth in it, pointing to why exactly this crisis is occurring.
Another area plagued by this disease is medicine, where the crisis has far more dire implications than in psychology. Take cancer research, for example. Last summer the scientist Leonard P. Freedman published a paper showing that sloppy data analysis, contaminated lab materials and poor experimental design all contributed to the problem. He found that some cancer studies did not just fail to find a cure; they did not offer any useful data at all, resulting in a waste of more than $28 billion. When a cancer study reaches the wrong conclusion, people suffer and die, and a multibillion-dollar treatment industry loses money.
Another instance occurred in 2012, when the cancer research expert Glenn Begley tried to replicate the findings of 53 landmark experiments. He and his team of scientists found that only 6 of the replications came out positive.
Researchers at Bayer note that a rule of thumb exists among venture capitalists: at least half of published studies, even those from the very best journals, will not work out the same when conducted in an industrial lab.
Brian Nosek, a psychologist and advocate for replication research, brings up an interesting observation: even a behavioural study of local undergraduate volunteers may require subtle calibrations, careful delivery of instructions, and attention to seemingly trivial factors such as the time of day.
This again hints that patience and attention to detail are required when performing experiments.
With such an extensive crisis going on, there must be an explanation, or at least the outline of one. Some plausible reasons for its occurrence are:
- The falsification of results. There was such a case at Harvard once, where a popular professor was found to have falsified a number of her results. A related factor could be unsophisticated equipment and a dearth of careful practices before performing an experiment.
- The small size of the sample, which lets statistical errors creep in. The results of a study on 50 people cannot be extrapolated to conform to those of 500 people.
- The change in circumstances. A study conducted about half a century ago cannot be expected to still hold true if conducted today.
- The quality of the replication itself, which is prone to experimenter error.
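The sample-size point above can be illustrated with a quick simulation. The sketch below is purely illustrative (the population mean, spread, and sample sizes are assumptions, not from any cited study): it draws many samples of size 50 and size 500 from the same population and compares how much their estimated means fluctuate.

```python
import random
import statistics

def mean_spread(n, trials=1000, mu=0.5, sigma=1.0, seed=42):
    """Draw `trials` samples of size n from a normal population and
    return the standard deviation of the sample means (how much a
    study of size n would fluctuate if repeated many times)."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.gauss(mu, sigma) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

spread_50 = mean_spread(50)    # fluctuation of a 50-person study
spread_500 = mean_spread(500)  # fluctuation of a 500-person study

# The spread of sample means shrinks roughly with the square root of n,
# so a tenfold larger sample is only about 3x more stable -- but the
# small sample is noisy enough to "find" effects that are not there.
print(spread_50, spread_500, spread_50 / spread_500)
```

Running this shows the 50-person studies wobbling noticeably more than the 500-person ones, which is exactly why a striking result from a small sample so often fails to replicate.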
While these are just probable reasons, the fact remains that the race to publish quickly in high-end journals is definitely diluting the quality of experiments. As The Guardian put it in their report, “laboratory chiefs who publish most frequently in high profile journals will attract more funding and produce more “progeny” (graduate students), who will eventually run labs of their own, potentially taking bad scientific habits with them.” This is what is meant by the spread of bad science. The sooner an awareness is created that reaches more people, the faster this crisis can be curbed.