This is almost certainly wrong, but it illustrates some of the real complexities involved with journalists interpreting scientific research 1/
The first thing I do when I see a headline like this is try to find a press release. It gives you a lot of insight into how the story came about

This is what I found: medpagetoday.com/meetingcoverag…
The story appears to be from a study presented as an abstract at the American Gastroenterological Association's national conference

This is an immediate red flag
It's also worth noting that the title and lede of the press release are really, really bad

If you only read these two sentences, what message would you get???
So the story is from a poster presentation at a conference

What this means in practice:

- preliminary research
- not peer-reviewed
- likely to change before publication
- less likely to be correct
The story gets even murkier from here. If you look at the published manuscript, the odds ratio is only ~just~ significant, which means that this was quite a tenuous relationship
(Worth noting that there was no suggestion anywhere that this was a causal relationship until the press release; any speculation about the causes is totally hypothetical at this point)
What this tenuous relationship means, in practice, is that pet owners could have anywhere between a 1.8% and a 57% increase in their odds of IBS - a wide range whose bottom end verges on no increase at all
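(Quick arithmetic check, if you want it: "X% increased odds" is just the odds ratio minus 1, times 100. A minimal Python sketch using the CI bounds quoted above:)

```python
# "% increased odds" from an odds ratio: (OR - 1) * 100
# The bounds below are the 95% CI limits quoted in this thread
for odds_ratio in (1.018, 1.57):
    print(f"OR {odds_ratio}: {(odds_ratio - 1) * 100:.1f}% increased odds")
```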
The authors helpfully include a forest plot for their study at the bottom

Take a look and see what you think
Having seen this plot, are you more or less confident in the statement that IBS is associated with pet ownership?
See, the thing is, that top study appears to be contributing the ENTIRE association. Every other study found no significant association at all, but one single study has pushed the whole pooled relationship into statistical significance
Being the nerd I am, I decided to rerun the meta-analysis on their sample using the metan command in Stata

This is a bit quick and dirty, but using a random-effects model with inverse-variance weighting, I get these results
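(If you want to play along at home without Stata, here's roughly what this kind of pooling does under the hood: inverse-variance weighting on the log-OR scale, with a DerSimonian-Laird estimate of between-study variance for the random-effects model. A minimal Python sketch - the study numbers are made up for illustration, not the actual values from the abstract:)

```python
import math

# (log-OR, standard error) per study - ILLUSTRATIVE values only,
# with one large positive outlier standing in for the dominant study
studies = [
    (0.80, 0.25),   # hypothetical stand-in for the Singapore survey
    (0.05, 0.30),
    (-0.02, 0.28),
    (0.10, 0.35),
]

def meta_analyse(studies):
    """Fixed-effect and DerSimonian-Laird random-effects pooled log-OR."""
    y = [est for est, _ in studies]
    w = [1 / se ** 2 for _, se in studies]       # inverse-variance weights

    # Fixed-effect pooled estimate
    fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

    # Cochran's Q, then the DerSimonian-Laird tau^2
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)

    # Random-effects weights add the between-study variance tau^2
    w_re = [1 / (se ** 2 + tau2) for _, se in studies]
    re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    return fe, re, se_re

fe, re, se_re = meta_analyse(studies)
lo, hi = re - 1.96 * se_re, re + 1.96 * se_re
print(f"fixed-effect OR:   {math.exp(fe):.2f}")
print(f"random-effects OR: {math.exp(re):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```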
For the epi nerds: when I run it with a fixed-effects model my results are the same as those reported in the paper, but my random-effects model CI crosses the null (OR = 1) 🤔
But now comes the interesting part - what happens if I take out that single paper that appears to be driving the result?

What do you reckon?
Here's the result. The association disappears completely

It looks like one study is driving all of these results
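(In code, this "take one study out and see" check is just a leave-one-out loop over the same pooling function - a sketch reusing meta_analyse() and the illustrative numbers from above:)

```python
# Leave-one-out sensitivity check: drop each study in turn and re-pool
for i in range(len(studies)):
    subset = studies[:i] + studies[i + 1:]
    _, re, se_re = meta_analyse(subset)
    lo, hi = re - 1.96 * se_re, re + 1.96 * se_re
    print(f"without study {i}: OR {math.exp(re):.2f} "
          f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```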
So what is this study?

Essentially, a simple observational survey of people in Singapore
Now, I'm not going to critique this piece of research in-depth, but I think it's worth noting that it only surveyed 300 people, of whom 80 had IBS

The other studies looked at a total of ~2,500 people
So what we're seeing in the meta-analysis is basically a series of negative results being totally swamped by a single positive result

That is not great scientifically!
It's a bit like tossing a coin 5 times, getting 4 tails and 1 heads, and concluding that heads is the right answer
This is especially true when you consider that the p-value is 0.064, which means that these results aren't even ~technically~ significant in the random-effects model!
But bringing this back to #scicomm - how is a journalist meant to know this? It's complex stuff. Most scientists I know aren't comfortable re-running a meta-analysis to see what happens when you exclude studies
And the press release, let's remember, is astonishingly positive. No mention of the MASSIVE question mark remaining after this research, just "pet owners more likely to have IBS"
The real finding from this analysis is that there may be a very modest increase in the risk of IBS from owning a pet, but based on the totality of the evidence even that seems unlikely at present
Who do we blame for the misreporting?

I'll leave that to you

There are many steps along the way that could've corrected this, but none were taken
SMALL CORRECTION

The forest plot I included earlier for the random-effects analysis was of the log-transformed variables (oops). Here's the plot once exponentiated:
Also, the p-value is 0.064 for this model, which is technically not significant. The effect size is also different from that reported in the abstract; however, if I run a fixed-effects model everything matches exactly, so I suspect that's what was actually done here
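(For completeness: the back-transformation is just exponentiating the pooled log-OR and its CI limits - never the standard error itself. A sketch with hypothetical numbers, chosen so the CI just barely crosses 1 the way the random-effects result does here:)

```python
import math

log_or, se = 0.24, 0.13                 # hypothetical pooled values
z = log_or / se
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # two-sided normal p
lo, hi = log_or - 1.96 * se, log_or + 1.96 * se
print(f"OR {math.exp(log_or):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f}), p = {p:.3f}")
```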