Carlos Orsi
Writer, journalist and author of the popular science books “O Livro dos Milagres” (Vieira & Lent, 2011), “Pura Picaretagem” (LeYa, 2013) and “O Livro da Astrologia” (Kindle Direct Publishing, 2015)

Media, marketing and the avalanche of false-positive scientific tests


Anyone interested in science and health and exposed to the major media – newspapers, magazines, social networks – is surely used to skimming endless paragraphs about plants, molecules and assorted materials presented as “promising” cures or relief for the most varied conditions, based on experiments carried out in culture media or in mice. It is not uncommon for some enterprising figure to appropriate the headline, and for the “health benefits, according to studies” to become a powerful marketing tool in favor of some little-known fruit or food supplement.

Few people, perhaps, stop to wonder why, after being announced, most of these wonderful discoveries never seem to get beyond the stage of informal tips heard on a morning TV show and reach what should be the next logical stage: that of a duly approved medical treatment.

This is often attributed to the natural and necessary delays of the clinical validation process, but behind this explanation lies another, much more prosaic one: most of the initial studies reported as “promising discoveries” were simply wrong.

The statement above is not an exercise in iconoclasm, but a conclusion based on the already voluminous literature on false positives in preclinical and clinical research. This area of interest received a special boost in 2005 with the publication, in PLoS Medicine, of the landmark paper “Why Most Published Research Findings Are False”, by physician John Ioannidis, which drew attention, among other things, to inadequate statistical procedures and the use of very small samples.

Since then, the concern has deepened within medicine and spread to other areas. In 2011, the journal Psychological Science published the article “False-Positive Psychology”, drawing attention to what its authors, Joseph P. Simmons, Leif D. Nelson and Uri Simonsohn, call “researcher degrees of freedom” – a series of decisions, often informal and innocent at first glance, which, taken throughout the research process, end up biasing the result. “Flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates,” they warned. In 2015, an article in Science reported that fewer than half of a set of 100 important published experiments in psychology had proved reproducible.
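To see how such flexibility produces false positives, consider “optional stopping”, one of the degrees of freedom Simmons and colleagues discuss: test the data, and collect more participants only if the result is not yet significant. The sketch below is a minimal illustration of the mechanism (the parameters are assumptions chosen for the example, not figures from their paper); it simulates two-group studies with no real effect and shows how peeking inflates the false-positive rate.

```python
# Minimal sketch of "optional stopping": under the null hypothesis
# (no real group difference), a researcher runs a t-test after every
# extra batch of participants and stops as soon as p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def study_with_peeking(n_start=20, n_max=100, step=10):
    """One simulated study: returns True if peeking ever yields p < 0.05."""
    a = list(rng.normal(size=n_start))  # group A: pure noise
    b = list(rng.normal(size=n_start))  # group B: pure noise
    while len(a) <= n_max:
        if stats.ttest_ind(a, b).pvalue < 0.05:
            return True  # declared "significant" despite no real effect
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))
    return False

runs = 5000
hits = sum(study_with_peeking() for _ in range(runs))
# With nine looks at the data, the rate lands well above the nominal 5%.
print(f"False-positive rate with optional stopping: {hits / runs:.1%}")
```

Every dataset here is pure noise, yet repeated looks multiply the chances of crossing the 0.05 threshold at least once; combining several such degrees of freedom, Simmons and colleagues argue, pushes the rate higher still.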

The reproducibility criterion – any researcher who applies the same methods to materials equivalent to those of the original study should, in principle, reach the same results – serves, among other things, as a check on the expertise and competence of the original study's author. Crucially, without reproducibility there is no application: medicines are only trustworthy because it can be predicted, with reasonable certainty, that their effects will be reproduced in a fairly homogeneous way across the population of patients for whom they are indicated.

Three years before the Science article announcing the so-called “reproducibility crisis” in psychology, two prominent oncology researchers, C. Glenn Begley and Lee M. Ellis, published an opinion piece in Nature lamenting the high failure rate of clinical trials of new cancer treatments and pointing, among the main culprits, to “the quality of preclinical data” – the data obtained in cells and animals. “The scientific community assumes that the claims in a preclinical study can be taken at face value – that although there might be some errors in detail, the main message of the paper can be relied on and the data will, for the most part, stand the test of time. Unfortunately, this is not always the case”, the authors lament.

In January, Ioannidis and other researchers, such as psychologist Eric-Jan Wagenmakers, published in Nature Human Behaviour the article “A Manifesto for Reproducible Science”, in which they draw attention to the pitfalls lurking on the path between the formulation of a hypothesis and the publication of a truly valid conclusion, including the various lapses that lead human beings to deceive themselves: apophenia, the tendency to see patterns where there is only chaos; confirmation bias, the tendency to pay attention to only the fraction of the available information that appears to confirm our preconceptions; and hindsight bias, the tendency to consider certain sequences of events “obvious” or “predictable” – but only after they have occurred.

A few weeks ago, Ioannidis returned to the subject, this time in the medical journal JAMA, with the opinion piece “Acknowledging and Overcoming Nonreproducibility in Basic and Preclinical Research”. The first paragraph deserves to be quoted in full:

“The evidence for nonreproducibility in basic and preclinical research is compelling. Accumulated data from diverse subdisciplines and types of experiment suggest numerous problems that can create fertile ground for nonreproducibility. For example, most protocols and raw data are often not available for in-depth analysis or use by other scientists. The current incentive system rewards the selective publication of success stories. There is inappropriate use of statistical methods, and study design is often less than optimal. Simple laboratory errors – for example, contamination or misidentification of common cell lines – occur with some regularity.”

The inappropriate use of statistical methods in science has been alarming statisticians for some time. A year ago, the American Statistical Association published a statement warning against the abuse of the p-value as a criterion for scientific discovery. None of this is new: the other warnings contained in the articles cited here, such as those about psychological biases and about the vices fostered by an academic system that rewards mechanical productivity, do not stem from recent discoveries either. All of this is, by now, very clear.
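The arithmetic behind the statisticians' alarm is straightforward. As a hypothetical illustration in the spirit of the Royal Society paper on false discovery rates cited in the references (the numbers are assumptions chosen for the example, not figures from the ASA statement): suppose that only 10% of the hypotheses a field tests are actually true, and that studies have 80% power.

```python
# Back-of-envelope false discovery rate under the "p < 0.05" criterion.
# Assumed inputs (illustrative, not taken from any of the cited papers):
prior_true = 0.10  # fraction of tested hypotheses that are real effects
power      = 0.80  # chance a real effect yields p < 0.05
alpha      = 0.05  # chance a null effect yields p < 0.05

true_positives  = prior_true * power          # 0.10 * 0.80 = 0.080
false_positives = (1 - prior_true) * alpha    # 0.90 * 0.05 = 0.045
fdr = false_positives / (true_positives + false_positives)

print(f"Share of 'discoveries' that are false: {fdr:.0%}")  # 36%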

What seems much less clear is the impact these criticisms and warnings have had on academic practice, on postgraduate programs, on thesis committees and on the evaluations of funding agencies, whether at the global, national or even local level.

Initiatives such as the pre-registration of experiments, which limit the researcher's freedom to “change his mind” in the middle of a study and encourage the publication of negative results, are still little publicized. In everyday practice, the career wheel keeps turning, driven almost exclusively by the dubious observation that “p < 0.05”.

REFERENCES:

Why Most Published Research Findings Are False
(http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124)

Why Most Clinical Research Is Not Useful
(http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1002049)

Estimating the reproducibility of psychological science
(http://science.sciencemag.org/content/349/6251/aac4716)

Drug development: Raise standards for preclinical cancer research
(http://www.nature.com/nature/journal/v483/n7391/full/483531a.html)

An investigation of the false discovery rate and the misinterpretation of p-values
(http://rsos.royalsocietypublishing.org/content/1/3/140216)

False-Positive Psychology
(http://journals.sagepub.com/doi/full/10.1177/0956797611417632)

The ASA's Statement on p-Values: Context, Process, and Purpose
(http://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108)

A manifesto for reproducible science
(http://www.nature.com/articles/s41562-016-0021)

Preregistration Challenge (https://cos.io/prereg/)
