Spinning bad research


There’s an appalling amount of bad research out there and more is added with the publication of new papers every month.

How can we tell good research from bad?

[pullquote align="right" cite="" link="" color="" class="" size=""]Are the authors reporting the primary finding or have they replaced it with a secondary finding?[/pullquote]

One way is to look at whether the results hold up to statistical analysis: Are the findings statistically significant? Is the study large enough to have sufficient statistical power? What happens when you correct for confounding variables?
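To make the first of those questions concrete, here is a minimal sketch (not from the article; the numbers are hypothetical) of a significance check for the kind of two-arm trial being discussed, using only the Python standard library:

```python
from statistics import NormalDist

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical trial: 40/100 events in the treatment arm vs 50/100 in control.
z, p = two_proportion_z_test(40, 100, 50, 100)
print(f"z = {z:.2f}, p = {p:.3f}")  # z = -1.42, p = 0.155 — not significant
```

A p-value above 0.05 here means the apparent difference between arms could easily be chance, which is exactly the situation that tempts authors toward spin.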

Those analyses require grounding in statistics. But there’s an even simpler way to tell good research from bad. Are the authors reporting the primary finding or have they replaced it with a secondary finding? In other words, have they engaged in “spin”?

A paper in this month’s edition of Obstetrics and Gynecology, entitled “It’s All How You Spin It: Interpretive Bias in Research Findings in the Obstetrics and Gynecology Literature,” explains:

Spin is a classic concept in fields such as marketing, journalism, and politics, where it is defined as a form of propaganda to influence public opinion. The concept of spin in the medical literature has been described as the manipulation of language to convince the reader of the likely truth of the result.

This is a particular problem when the authors have undertaken a long, complicated investigation and arrived at results that are not statistically significant. There’s tremendous pressure to get some sort of publication out of the work.

One way to do that is to ignore the primary finding and look for a secondary finding that is statistically significant and present that instead. In fact, many such papers imply in the abstract that the secondary finding was what the authors were looking for.

What’s wrong with spinning research in this way?

Because many readers decide from the abstract whether to obtain further information from the full-text article, the authors evaluated the abstract for the following: 1) Was the primary outcome stated? 2) Was the effect size reported (ie, the sample size to discern the magnitude of the treatment effect)? 3) Was a precision estimate included (ie, confidence interval or P value)?

By promoting the secondary outcome in the abstract, the authors fail to acknowledge that there was no difference in the primary finding and thereby mislead readers as to the findings. That’s spin.

There are many ways to spin research findings so that negative findings are presented as positive findings.

Three major types of spin strategies were identified that highlighted that the experimental treatment was beneficial despite a statistically nonsignificant difference for the primary outcome: 1) emphasizing statistically significant secondary results despite a nonsignificant primary outcome (such as within-group comparisons, secondary outcomes, or subgroup analysis); 2) interpreting statistically nonsignificant primary results as showing treatment equivalence or comparable effectiveness when the study was not designed to assess equivalence or noninferiority (such trials require specific design and larger sample size than superiority trials); and 3) emphasizing the beneficial effect of the treatment despite statistically nonsignificant results (eg, trending results).

Spin is disappointingly common:

the literature from other medical specialties such as oncology, anesthesiology, intensive care medicine, surgery, and psychiatry has noted rates of spin ranging from 59% to 66%.

It occurs less often in the OB-GYN literature, but it is still a big problem:

I reviewed a decade (January 2006 through December 2015) of the tables of contents of the journals Obstetrics & Gynecology and the American Journal of Obstetrics & Gynecology to identify RCTs. In this time period, there were 503 RCTs, of which, half (50%, n=251) noted a nonsignificant primary outcome (P≥.05).

Spin was employed in fully HALF of all OB-GYN RCTs. A substantial proportion of the spin occurred in the abstracts. Simply put, the abstracts misrepresented the findings of the study. That’s why reading the abstract is never enough and why journalists should never rely on the press release to report on a study’s findings, but MUST read the entire paper.

It seems to me that spin is a particular problem in breastfeeding research. That’s why the bulk of breastfeeding research is weak and conflicting. It doesn’t reflect what the authors were attempting to prove, but rather an incidental finding that the authors choose to highlight while attempting to minimize the fact that they found the opposite of what they wanted.

For example, the authors might set out to determine whether breastfeeding increases IQ as measured by specialized testing. The results show that breastfeeding does not increase IQ. Like most negative findings, that’s unlikely to get published, so the authors search the subtests, find one with a statistically significant difference, and declare that breastfeeding increases (for example) gross motor ability.
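The problem with fishing through subtests is the multiple-comparisons effect: even when no real effect exists, testing many secondary outcomes makes a spurious “significant” result likely. A small illustration (hypothetical, not from the article):

```python
# If each test has a 5% false-positive rate, the chance that AT LEAST ONE
# of n independent tests comes up "significant" by luck alone is:
#     1 - (1 - 0.05) ** n

def familywise_error_rate(n_tests: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive among n independent tests."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 5, 10, 20):
    rate = familywise_error_rate(n)
    print(f"{n:2d} tests -> {rate:.0%} chance of a spurious 'finding'")
# prints 5%, 23%, 40%, 64%
```

With 20 subtests and no true effect at all, the odds are nearly two in three that something crosses the p < 0.05 threshold, which is why an isolated secondary finding proves very little.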

How does the average reader or journalist protect herself from research spin?

There are two threshold questions that must always be asked: What were the authors attempting to find? And did they find it? If they didn’t find it, that is what the headline should reflect. The fact that they were able to slice and dice their data to come up with a secondary finding that is statistically significant is often meaningless and should be reported as such.