Spinning bad research


There’s an appalling amount of bad research out there, and more is added every month as new papers are published.

How can we tell good research from bad?


One way is to look at whether the results hold up to statistical analysis: Are the findings statistically significant? Is the study large enough to have sufficient statistical power? What happens when you correct for confounding variables?
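To make the power question concrete, here’s a minimal sketch (my own illustration with made-up numbers, not anything from the papers discussed below) that estimates statistical power by simulation: draw hypothetical trial data with a known true effect, and count how often a t-test actually detects it.

```python
# Estimate the power of a two-sample t-test by simulation: the fraction
# of hypothetical trials in which a real effect of a given size is
# detected at the conventional p < .05 threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(n_per_group, effect_size, alpha=0.05, n_sims=5_000):
    detected = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)  # a real effect exists
        _, p = stats.ttest_ind(control, treated)
        detected += p < alpha
    return detected / n_sims

print(estimated_power(n_per_group=20, effect_size=0.3))   # roughly 0.15: badly underpowered
print(estimated_power(n_per_group=175, effect_size=0.3))  # roughly 0.80: conventionally adequate
```

A study that detects a real effect only 15% of the time will usually come up “negative” even when the effect is real, and an underpowered negative result is exactly the raw material spin is made from.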

Those analyses require grounding in statistics. But there’s an even simpler way to tell good research from bad. Are the authors reporting the primary finding or have they replaced it with a secondary finding? In other words, have they engaged in “spin”?

A paper in this month’s edition of Obstetrics & Gynecology, entitled “It’s All How You Spin It: Interpretive Bias in Research Findings in the Obstetrics and Gynecology Literature,” explains:

Spin is a classic concept in fields such as marketing, journalism, and politics, where it is defined as a form of propaganda to influence public opinion. The concept of spin in the medical literature has been described as the manipulation of language to convince the reader of the likely truth of the result.

This is a particular problem when the authors have undertaken a long, complicated investigation and arrived at results that are not statistically significant. There’s tremendous pressure to get some sort of publication out of the work.

One way to do that is to ignore the primary finding and look for a secondary finding that is statistically significant and present that instead. In fact, many such papers imply in the abstract that the secondary finding was what the authors were looking for.
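The arithmetic that makes this strategy so reliable is worth spelling out. If none of the outcomes is truly affected, each independent test still has a 5% chance of crossing p < .05, so the chance of finding at least one “significant” secondary result grows quickly with the number of outcomes examined. A few illustrative lines of Python:

```python
# Chance of at least one spurious "p < .05" result among k independent
# outcome measures when NO true effect exists anywhere: 1 - (1 - alpha)^k
alpha = 0.05
for k in (1, 5, 10, 20):
    print(f"k={k:2d} outcomes: {1 - (1 - alpha) ** k:.0%}")
# k= 1 outcomes: 5%
# k= 5 outcomes: 23%
# k=10 outcomes: 40%
# k=20 outcomes: 64%
```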

What’s wrong with spinning research in this way?

Because many readers decide from the abstract whether to obtain further information from the full-text article, the authors evaluated the abstract for the following: 1) Was the primary outcome stated? 2) Was the effect size reported (ie, the sample size to discern the magnitude of the treatment effect)? 3) Was a precision estimate included (ie, confidence interval or P value)?

By promoting the secondary outcome in the abstract, the authors fail to acknowledge that there was no difference in the primary finding and thereby mislead readers as to the findings. That’s spin.

There are many ways to spin research findings so that negative findings are presented as positive findings.

Three major types of spin strategies were identified that highlighted that the experimental treatment was beneficial despite a statistically nonsignificant difference for the primary outcome: 1) emphasizing statistically significant secondary results despite a nonsignificant primary outcome (such as within-group comparisons, secondary outcomes, or subgroup analysis); 2) interpreting statistically nonsignificant primary results as showing treatment equivalence or comparable effectiveness when the study was not designed to assess equivalence or noninferiority (such trials require specific design and larger sample size than superiority trials); and 3) emphasizing the beneficial effect of the treatment despite statistically nonsignificant results (eg, trending results).

Spin is disappointingly common:

the literature from other medical specialties such as oncology, anesthesiology, intensive care medicine, surgery, and psychiatry has noted rates of spin ranging from 59% to 66%.

It occurs less in the OB-GYN literature, but is still a big problem:

I reviewed a decade (January 2006 through December 2015) of the tables of contents of the journals Obstetrics & Gynecology and the American Journal of Obstetrics & Gynecology to identify RCTs. In this time period, there were 503 RCTs, of which, half (50%, n=251) noted a nonsignificant primary outcome (P≥.05).

Spin was employed in fully HALF of all OB-GYN RCTs. A substantial proportion of the spin occurred in the abstracts. Simply put, the abstracts misrepresented the findings of the study. That’s why reading the abstract is never enough, and why journalists should never rely on a press release to report on the findings of a study but MUST read the entire paper.

It seems to me that spin is a particular problem in breastfeeding research. That’s why the bulk of breastfeeding research is weak and conflicting. It doesn’t reflect what the authors were attempting to prove, but rather an incidental finding that the authors chose to highlight while minimizing the fact that they found the opposite of what they wanted.

For example, the authors might undertake to determine if breastfeeding increases IQ as determined by specialized testing. The results show that breastfeeding does not increase IQ. Like most negative findings, that’s unlikely to get published, so the authors search the subtests, find one with a statistically significant difference, and declare that breastfeeding increases (for example) gross motor ability.
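How often would that kind of fishing expedition pay off by chance alone? Here’s a quick simulation (entirely hypothetical numbers: eight subtests, sixty children per group, and no true effect on anything):

```python
# "Subtest fishing": both groups are drawn from the SAME population on
# every subtest, so any significant difference below is pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_subtests, n_per_group = 5_000, 8, 60

breastfed   = rng.normal(100, 15, (n_studies, n_subtests, n_per_group))
formula_fed = rng.normal(100, 15, (n_studies, n_subtests, n_per_group))

# One t-test per subtest per simulated study
_, p = stats.ttest_ind(breastfed, formula_fed, axis=-1)

spinnable = (p < 0.05).any(axis=1).mean()
print(spinnable)  # about 0.34: a third of these null studies yield at
                  # least one subtest "finding" that could be spun
```

In other words, roughly one null study in three hands its authors a publishable-looking secondary result without any real effect existing at all.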

How does the average reader or journalist protect herself from research spin?

There are two threshold questions that must always be asked: What were the authors attempting to find? And did they find it? If they didn’t find it, that’s what the headline should reflect. The fact that they were able to slice and dice their data to come up with a secondary finding that is statistically significant is often meaningless and should be reported as such.

6 Responses to “Spinning bad research”

  1. vanlankuyu
    February 3, 2017 at 10:01 am #

    Long time lurker here, finally moved to comment, because this topic is near and dear to my heart.

    There is a lot of pressure on scientists to come up with something that is publishable. I’m a PhD student in a STEM field, and my advisor got a major grant to fund my research. After three years of work on the project, the results were not significant. There’s pressure from the granting agency (if no publications come out of this project, my advisor is less likely to get funded in the future), and there’s pressure on me as an early career scientist (I only had 1 year of funding left, and didn’t have anything I could use for my dissertation). I didn’t feel comfortable publishing results that I knew were not significant (the study was not big enough to say there was no effect), so I’m struggling to finish my PhD, but without funding for my salary or research expenses, as my advisor has not been able to successfully get a grant since the initial one. Because of the way the system works, it’s a Catch-22, where integrity (not trying to spin results) is punished.

  2. Steph858
    January 31, 2017 at 9:41 am #

    More reasons to implement the pre-research register Ben Goldacre proposed in ‘Bad Science’. He recommended that a register of scientific studies to be undertaken in the near future be established; once the scheme got up and running, no reputable journal would publish the results of a study not registered before it commenced.

    His intentions were to combat publication bias, but it could also discourage spin: should the aforementioned register be implemented, the practice of researchers writing their own abstracts ought to become obsolete. All reputable journals will instead publish the text of the submissions the authors made to register their study, with perhaps a few minor edits if necessary. This should make it much easier for lay readers to spot when a paper’s authors are trying to slice and dice their data to draw out some – any – positive findings because the data collected failed to support the hypothesis the authors originally set out to prove.

    • Heidi_storage
      January 31, 2017 at 12:22 pm #

      They already do require this for RCTs. How would this work for, say, database studies?

  3. cookiebaker
    January 30, 2017 at 7:58 pm #

    “For example, the authors might undertake to determine if breastfeeding increases IQ as determined by specialized testing. The results show that breastfeeding does not increase IQ. Like most negative findings, that’s unlikely to get published, so the authors search the subtest, find one with a statistically significant difference and declare that breastfeeding increases (for example) gross motor ability.”

    This paragraph appears twice. Was that intentional?

  4. Heidi_storage
    January 30, 2017 at 2:22 pm #

    Oh, and I forgot: “Secondary analyses” love to present such isolated findings in subsequent papers. Yes, of course it can be valuable to do so, but in practice these articles often fall into the “salami slicing” category of publications.

  5. Heidi_storage
    January 30, 2017 at 2:21 pm #

    A reputable journal will, of course, attempt to keep the authors from overstating conclusions or presenting a negative study as a positive study, but as this commentary demonstrates it’s difficult to do. A scientist or doctor reader will, one hopes, be able to contextualize the results in a suitable, nuanced way, but the popular media reporting is generally not going to allow the lay public to do the same. (Even doctors aren’t always the subtlest readers; they’re not necessarily statisticians, and they are busy people.)
