The Myth of Scientific Objectivity

Dr. Terence Kealey during a CrossFit Health event on March 9, 2019

“I think there is a vast myth that scientists are somehow objective and honest.”

Dr. Terence Kealey

Kealey is a former vice-chancellor of the University of Buckingham, a professor of clinical biochemistry, a scholar affiliated with the Cato Institute, and the author of the book Breakfast Is a Dangerous Meal. During his presentation, he discussed the myth of scientific objectivity, drawing on examples from history as well as on his personal experience within many of the most reputable scientific institutions.

Video published on 12 Aug 2019. Reference. Full transcript.

Spin in Published Biomedical Literature: A Systematic Review

Medical research publications: how abstracts can modify the perception of results

Quinn Grundy presents original research that explores the nature and prevalence of spin in the biomedical literature. Video published on 11 Oct 2017. Reference.

Objective
To explore the nature and prevalence of spin in the biomedical literature.

Design
In a systematic review and meta-analysis, we searched MEDLINE, PreMEDLINE, Embase, Scopus, and handsearched reference lists for all articles published between 1946 and 24 November 2016 that included the quantitative measurement of spin in the biomedical literature for at least 1 outcome. Two independent coders extracted data on the characteristics of articles and included studies, methods for assessing spin, and all spin-related results. The data were heterogeneous; results were grouped inductively into outcome-related categories. We had sufficient data to use meta-analysis to analyze the association of industry sponsorship of research with the presence of spin.

Results
We identified 4219 articles after removing duplicates and included 35 articles that investigated spin: clinical trials (23/35, 66%); observational studies (7/35, 20%); diagnostic accuracy studies (2/35, 6%); and systematic reviews and meta-analyses (4/35, 11%), with some articles including multiple study designs. The nature and manifestations of spin varied according to study design. We grouped results into the following categories: prevalence of spin, level of spin, factors associated with spin, and effects of spin on readers’ interpretations. The prevalence of spin was highest, but also most variable, in trials (median, 57% of main texts containing spin; range, 19%-100% across 16 articles). Source of funding was hypothesized to be a factor associated with spin; however, the meta-analysis found no significant association, possibly owing to the heterogeneity of the 7 included articles.

Conclusions
Spin appears to be common in the biomedical literature, though this varies by study design, with the highest rates found in clinical trials. Spin manifests in diverse ways, which challenged investigators attempting to systematically identify and document instances of spin. Widening the investigation of factors contributing to spin from the characteristics of individual authors or studies to the cultures and structures of research that may incentivize or disincentivize spin would be instructive in developing strategies to mitigate its occurrence. Further research is also needed to assess the impact of spin on readers’ decision making. Editors and peer reviewers should be familiar with the prevalence and manifestations of spin in their area of research to ensure accurate interpretation and dissemination of research.

Identification of Spin in Clinical Studies Evaluating Biomarkers in Ovarian Cancer

Medical research publications: how abstracts can modify the perception of results

Mona Ghannad presents a systematic review which documents and classifies spin or overinterpretation, as well as facilitators of spin, in recent clinical studies evaluating performance of biomarkers in ovarian cancer. Video published on 11 Oct 2017. Reference.

Objective
The objective of this systematic review was to document and classify spin or overinterpretation, as well as facilitators of spin, in recent clinical studies evaluating performance of biomarkers in ovarian cancer.

Design
We searched PubMed systematically for all studies published in 2015. Studies eligible for inclusion described 1 or more trial designs for identification and/or validation of prognostic, predictive, or diagnostic biomarkers in ovarian cancer. Reviews, animal studies, and cell line studies were excluded. All studies were screened by 2 reviewers. To document and characterize spin, we collected information on the quality of evidence supporting the study conclusions, linking the performance of the marker to outcomes claimed.

Results
In total, 1026 potentially eligible articles were retrieved by our search strategy, and 345 studies met all eligibility criteria and were included. The first 200 studies, when ranked according to publication date, will be included in our final analysis. Data extraction was done by one researcher and validated by a second. Here we report the specific information extracted and analyzed for the first 50 studies: study and journal characteristics, key information on the relevant evidence in the methods, and the conclusions claimed. Forms of spin and facilitators of spin were identified in studies trying to establish the performance of the discovered biomarker.

The forms of spin identified were:

  1. other purposes of biomarker claimed not investigated (18 of 50 studies [36%]);
  2. incorrect presentation of results (15 of 50 studies [30%]);
  3. mismatch between the biomarker’s intended clinical application and population recruited (11 of 50 studies [22%]);
  4. mismatch between intended aim and conclusion (7 of 50 studies [14%]);
  5. and mismatch between abstract conclusion and results presented in the main text (6 of 50 studies [12%]).

Frequently observed facilitators of spin were:

  1. not clearly prespecifying a formal test of hypothesis (50 of 50 studies [100%]);
  2. not stating sample size calculations (50 of 50 studies [100%]);
  3. not prespecifying a positivity threshold of continuous biomarker (17 of 43 studies [40%]);
  4. not reporting imprecision or statistical test for data shown (ie, confidence intervals, P values) (12 of 50 studies [24%]);
  5. and selective reporting of significant findings between results for primary outcome reported in abstract and results reported in main text (9 of 50 studies [18%]).

Conclusions
Spin was frequently documented in abstracts, results, and conclusions of clinical studies evaluating performance of biomarkers in ovarian cancer. Inflated and selective reporting of biomarker performance may account for a considerable amount of waste in the biomarker discovery process. Strategies to curb exaggerated reporting are needed to improve the quality and credibility of published biomarker studies.

Systematic reviews with financial conflicts of interest linked to favourable conclusions and lower methodological quality

Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality

Abstract

Background
Financial conflicts of interest in systematic reviews (e.g. funding by drug or device companies or authors’ collaboration with such companies) may impact on how the reviews are conducted and reported.

Objectives
To investigate the degree to which financial conflicts of interest related to drug and device companies are associated with results, conclusions, and methodological quality of systematic reviews.

Search methods
We searched PubMed, Embase, and the Cochrane Methodology Register for studies published up to November 2016. We also read reference lists of included studies, searched grey literature sources, and Web of Science for studies citing the included studies.

Selection criteria
Eligible studies were studies that compared systematic reviews with and without financial conflicts of interest in order to investigate differences in results (estimated treatment effect and frequency of statistically favourable results), frequency of favourable conclusions, or measures of methodological quality of the review (e.g. as evaluated on the Oxman and Guyatt index).

Data collection and analysis
Two review authors independently determined the eligibility of studies, extracted data, and assessed risk of bias. We synthesised the results of each study relevant to each of our outcomes. For meta‐analyses, we used Mantel‐Haenszel random‐effects models to estimate risk ratios (RR) with 95% confidence intervals (CIs), with RR > 1 indicating that systematic reviews with financial conflicts of interest more frequently had statistically favourable results or favourable conclusions, and had lower methodological quality. When a quantitative synthesis was considered not meaningful, results from individual studies were summarised qualitatively.
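
To make the pooling step concrete, here is a minimal sketch (not the review's own code, and not based on its data) of a DerSimonian-Laird random-effects pooling of risk ratios, offered as a simplified stand-in for the Mantel-Haenszel random-effects models described above; the four study counts are hypothetical.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of risk ratios.
# Simplified illustration only; the four studies below are hypothetical.
import math

# Each tuple: (events_coi, total_coi, events_no_coi, total_no_coi)
studies = [(30, 60, 20, 60), (15, 40, 12, 45), (25, 55, 18, 50), (10, 30, 9, 35)]

log_rr, var = [], []
for e1, n1, e0, n0 in studies:
    log_rr.append(math.log((e1 / n1) / (e0 / n0)))
    var.append(1/e1 - 1/n1 + 1/e0 - 1/n0)  # large-sample variance of log RR

# Fixed-effect (inverse-variance) pooling, used to compute heterogeneity (Q)
w = [1 / v for v in var]
pooled_fe = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, log_rr))

# DerSimonian-Laird between-study variance (tau^2)
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects pooled risk ratio with a 95% confidence interval
w_re = [1 / (v + tau2) for v in var]
pooled = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f} to {math.exp(pooled + 1.96 * se):.2f})")
```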

Main results
Ten studies with a total of 995 systematic reviews of drug studies and 15 systematic reviews of device studies were included. We assessed two studies as low risk of bias and eight as high risk, primarily because of risk of confounding. The estimated treatment effect was not statistically significantly different for systematic reviews with and without financial conflicts of interest (Z‐score: 0.46, P value: 0.64; based on one study of 14 systematic reviews which had a matched design, comparing otherwise similar systematic reviews). We found no statistically significant difference in frequency of statistically favourable results for systematic reviews with and without financial conflicts of interest (RR: 0.84, 95% CI: 0.62 to 1.14; based on one study of 124 systematic reviews). An analysis adjusting for confounding due to methodological quality (i.e. score on the Oxman and Guyatt index) provided a similar result. Systematic reviews with financial conflicts of interest more often had favourable conclusions compared with systematic reviews without (RR: 1.98, 95% CI: 1.26 to 3.11; based on seven studies of 411 systematic reviews). Similar results were found in two studies with a matched design, which therefore had a reduced risk of confounding. Systematic reviews with financial conflicts of interest tended to have lower methodological quality compared with systematic reviews without financial conflicts of interest (RR for 11 dimensions of methodological quality spanned from 1.00 to 1.83). Similar results were found in analyses based on two studies with matched designs.

Authors’ conclusions
Systematic reviews with financial conflicts of interest more often have favourable conclusions and tend to have lower methodological quality than systematic reviews without financial conflicts of interest. However, it is uncertain whether financial conflicts of interest are associated with the results of systematic reviews. We suggest that patients, clinicians, developers of clinical guidelines, and planners of further research could primarily use systematic reviews without financial conflicts of interest. If only systematic reviews with financial conflicts of interest are available, we suggest that users read the review conclusions with skepticism, critically appraise the methods applied, and interpret the review results with caution.

Medical research publications: how abstracts can recast negative (or non-significant) results as positive ones

Evaluation of Spin in the Abstracts of Emergency Medicine Randomized Controlled Trials

Spin is common (>40%) in emergency medicine randomized controlled trials…

May 2019

Study objective

We aim to investigate spin in emergency medicine abstracts, using a sample of randomized controlled trials from high-impact-factor journals with statistically nonsignificant primary endpoints.

Methods

This study investigated spin in the abstracts of emergency medicine randomized controlled trials published from 2013 to 2017 in the top 5 emergency medicine journals and in general medical journals. Investigators screened records for inclusion and extracted data on spin. We considered it evidence of spin if trial authors focused on statistically significant results, interpreted statistically nonsignificant results as equivalent or noninferior, used favorable rhetoric in the interpretation of nonsignificant results, or claimed benefit of an intervention despite statistically nonsignificant results.

Results

Of 772 abstracts screened, 114 randomized controlled trials reported statistically nonsignificant primary endpoints. Spin was found in 50 of 114 abstracts (44.3%). Industry-funded trials were more likely to have evidence of spin in the abstract (unadjusted odds ratio 3.4; 95% confidence interval 1.1 to 11.9). In the abstracts’ results, evidence of spin was most often due to authors’ emphasizing a statistically significant subgroup analysis (n=9). In the abstracts’ conclusions, spin was most often due to authors’ claiming they accomplished an objective that was not a prespecified endpoint (n=14).
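
For readers unfamiliar with the term, the following sketch shows how an unadjusted odds ratio and its 95% Wald confidence interval are computed from a 2x2 table; the counts are purely illustrative and are not the study's data.

```python
# Minimal sketch: unadjusted odds ratio with a 95% Wald confidence interval
# from a 2x2 table. The counts below are hypothetical, not the study's data.
import math

# Rows: industry-funded vs not; columns: spin present vs absent (hypothetical)
a, b = 14, 10   # industry-funded: spin, no spin
c, d = 36, 54   # not industry-funded: spin, no spin

or_hat = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_hat) - 1.96 * se_log_or)
hi = math.exp(math.log(or_hat) + 1.96 * se_log_or)
print(f"OR = {or_hat:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```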

Conclusion

Spin was prevalent in the abstracts of the selected emergency medicine randomized controlled trials. Authors most commonly incorporated spin into their reports by focusing on statistically significant results for secondary outcomes or subgroup analyses when the primary outcome was statistically nonsignificant. Spin was more common in studies that had some component of industry funding.

Élise Lucet on glyphosate

Brut interview, January 2019

Envoyé spécial devoted a special evening to glyphosate on Thursday 17 January at 9 pm on France 2

Suffering from an incurable cancer, the American Dewayne Johnson took Monsanto to court. He is the first person in the world to have won a lawsuit against the American giant. Élise Lucet tells the whole story for Brut.

The corruption of science by the agrochemical industry

The Monsanto Papers: Poisoning the scientific well

Monsanto flooded scientific journals with ghostwritten articles and interfered in the scientific process in order to defend its glyphosate herbicides

GMWatch reports, 14 August 2018.

The research article The Monsanto Papers: Poisoning the scientific well is a useful peer-reviewed source detailing Monsanto’s deceptive corporate-science activities aimed at defending its glyphosate herbicide, as revealed in the company’s internal documents force-disclosed in US cancer litigation and obtained by US Right to Know through freedom of information requests.

Abstract

OBJECTIVE
Examination of de-classified Monsanto documents from litigation in order to expose the impact of the company’s efforts to influence the reporting of scientific studies related to the safety of the herbicide glyphosate.

METHODS
A set of 141 recently de-classified documents, made public during the course of pending toxic tort litigation (In Re Roundup Products Liability Litigation), was examined.

RESULTS
The documents reveal Monsanto-sponsored ghostwriting of articles published in toxicology journals and the lay media, interference in the peer review process, behind-the-scenes influence on retraction and the creation of a so-called academic website as a front for the defense of Monsanto products.

CONCLUSION
The use of third-party academics in the corporate defense of glyphosate reveals that this practice extends beyond the corruption of medicine and persists in spite of efforts to enforce transparency in industry manipulation.

Medical research often ignores relevant existing evidence and patient needs

Why comparisons must address genuine uncertainties

The design of treatment research often reflects commercial and academic interests; ignores relevant existing evidence; uses comparison treatments known in advance to be inferior; and ignores the needs of the users of research results (patients, health professionals, and others).

A good deal of research is done even when there are no genuine uncertainties. Researchers who fail to conduct systematic reviews of past tests of treatments before embarking on further studies sometimes don’t recognise (or choose to ignore the fact) that uncertainties about treatment effects have already been convincingly addressed. This means that people participating in research are sometimes denied treatment that could help them, or given treatment likely to harm them.

When researchers continue to embark on the same research for decades without reviewing existing evidence systematically.

The diagram that accompanies this and the following paragraph shows the accumulation of evidence from fair tests done to assess whether antibiotics (compared with inactive placebos) reduce the risk of post-operative death in people having bowel surgery (Lau et al. 1995). The first fair test was reported in 1969. The results of this small study left uncertainty about whether antibiotics were useful – the horizontal line representing the results spans the vertical line that separates favourable from unfavourable effects of antibiotics. Quite properly, this uncertainty was addressed in further tests in the early 1970s.

As the evidence accumulated, however, it became clear by the mid-1970s that antibiotics reduce the risk of death after surgery (the horizontal line falls clearly on the side of the vertical line favouring treatment). Yet researchers continued to do studies through to the late 1980s. Half the patients who received placebos in these later studies were thus denied a form of care which had been shown to reduce their risk of dying after their operations. How could this have happened? It was probably because researchers continued to embark on research without reviewing existing evidence systematically. This behaviour remains all too common in the research community, partly because some of the incentives in the world of research – commercial and academic – do not put the interests of patients first (Chalmers 2000).
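
The idea behind the Lau et al. diagram is a cumulative meta-analysis: trials are pooled in publication order, and one notes when the pooled confidence interval first excludes "no effect". The sketch below illustrates that logic with invented trial data (it is not Lau et al.'s dataset), using a simple fixed-effect pooling of odds ratios.

```python
# Minimal sketch of a cumulative meta-analysis: pool trials in publication
# order and report when the 95% CI first excludes "no effect" (OR = 1).
# The trials below are invented for illustration, not the Lau et al. data.
import math

# (year, deaths_antibiotic, n_antibiotic, deaths_placebo, n_placebo)
trials = [
    (1969, 4, 50, 6, 50),
    (1971, 3, 40, 7, 42),
    (1973, 5, 60, 11, 58),
    (1975, 2, 45, 8, 44),
    (1977, 3, 55, 9, 57),
]

weights, effects = [], []  # inverse-variance (fixed-effect) accumulation
for year, e1, n1, e0, n0 in trials:
    a, b = e1, n1 - e1          # antibiotic: deaths, survivors
    c, d = e0, n0 - e0          # placebo: deaths, survivors
    log_or = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d  # large-sample variance of log OR
    weights.append(1 / var)
    effects.append(log_or)

    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
    verdict = "CI excludes 1" if hi < 1 else "still uncertain"
    print(f"{year}: cumulative OR {math.exp(pooled):.2f} "
          f"(95% CI {lo:.2f} to {hi:.2f}) -- {verdict}")
```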

Patients and participants in research can also suffer because researchers have not systematically reviewed relevant evidence from animal research before beginning to test treatments in humans. A Dutch team reviewed the experience of over 7000 patients who had participated in tests of a new calcium-blocking drug given to people experiencing a stroke. They found no evidence to support its increasing use in practice (Horn and Limburg 2001). This made them wonder about the quality and findings of the animal research that had led to the research on patients. Their review of the animal studies revealed that these had never suggested that the drug would be useful in humans (Horn et al. 2001).

The most common reason that research does not address genuine uncertainties is that researchers simply have not been sufficiently disciplined to review relevant existing evidence systematically before embarking on new studies. Sometimes there are more sinister reasons, however. Researchers may be aware of existing evidence, but they want to design studies to ensure that their own research will yield favourable results for particular treatments. Usually, but not always, this is for commercial reasons (Djulbegovic et al. 2000; Sackett and Oxman 2003; Chalmers and Glasziou 2009; Macleod et al. 2014). These studies are deliberately designed to be unfair tests of treatments. This can be done by withholding a comparison treatment known to help patients (as in the example given above), or giving comparison treatments in inappropriately low doses (so that they don’t work so well), or in inappropriately high doses (so that they have more unwanted side effects) (Mann and Djulbegovic 2012). It can also result from following up patients for too short a time (and missing delayed effects of treatments), and by using outcome measures (‘surrogates’) that have little or no correlation with the outcomes that matter to patients.

It may be surprising to readers of this essay that the research ethics committees established during recent decades to ensure that research is ethical have done so little to influence this research malpractice. Most such committees have let down the people they should have been protecting because they have not required researchers and sponsors seeking approval for new tests to have reviewed existing evidence systematically (Savulescu et al. 1996; Chalmers 2002). The failure of research ethics committees to protect patients and the public efficiently in this way emphasizes the importance of improving general knowledge about the characteristics of fair tests of medical treatments.

This is a reprint from The James Lind Library 2.1 Why comparisons must address genuine uncertainties.

The Glyphosate Saga: Press conference, 27 September 2017

The Monsanto Papers: proof of scientific falsification

Video published on 18 Oct 2017 by Greens EFA.

Speakers:
Michèle RIVASI, Greens/EFA MEP
Kathryn FORGIE, attorney, Andrus Wagstaff law firm
Carey GILLAM, journalist, Research Director, U.S. Right to Know

The Monsanto Papers, secret internal documents, have now been made public thanks to over 10,000 farmers who have taken Monsanto to court, accusing the company’s glyphosate weedkillers of causing them to develop a cancer called non-Hodgkin lymphoma.

The documents reveal the various strategies and tactics used by Monsanto to ensure that they can sell their star product, Roundup, despite the clear dangers for humans and for the environment.

Alternatives to pesticides

Bad strategies, unethical tactics used by the pesticide industry

This 2017 trailer highlights some of Monsanto’s tricks

Video published on 10 Oct 2017 by Greens EFA.

The Monsanto Papers are secret internal documents that have now been made public thanks to over 10,000 farmers who have taken Monsanto to court, accusing the company’s glyphosate weedkillers of causing them to develop a cancer called non-Hodgkin lymphoma.

The documents reveal the various strategies and tactics used by Monsanto to ensure that they can sell their star product, Roundup, despite the clear dangers for humans and for the environment.

Alternatives to pesticides