One critical failure of (mis-applied) empirical / scientific thinking is over-reliance on evidence. Evidence is essential. It's just not enough - we can easily be misled by data.

Here are a couple dozen tweets on how this happens, and what to do about it. #tweetstorm
tl;dr: In general, be wary of over-reliance on evidence, since it is inevitably conditioned on many assumptions or factors, some of which change.
That over-reliance on data is especially bad with small n, in complex domains, or in domains with non-predictive theory.
The obvious reductio ad absurdum for over-reliance on evidence is when n=0 for the case we care about.
This is basically @nntaleb's case of induction by turkeys - they haven't been killed yet, so they conclude they're safe. In fact, even with very large n, all data is conditional, not absolute.
A recent blog post by @phl43 (via Robert Wiblin of @80000Hours) makes this point differently, but clearly: because results are always conditional, data can only be conditionally informative. necpluribusimpar.net/why-falsificat…
That post argues against a simplistic version of Popper's falsificationism. If physicists see anomalous results falsifying a theory, they question the equipment, not the theory. The point is similar when looking at data from any study - randomized or observational.
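To make that concrete (my framing, not the author's or the blog post's): in Bayesian terms, what a study updates is a hypothesis H together with a bundle of auxiliary assumptions A - about the instruments, the sample, the era - not H on its own.

$$P(H \mid D, A) = \frac{P(D \mid H, A)\,P(H \mid A)}{P(D \mid A)}$$

A surprising result D only counts against the conjunction of H and A; if some assumption in A is shaky, questioning the equipment can be the rational response.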
As a side point, this is part of the explanation of why randomized trials are in many ways better evidence - the set of assumptions needed goes down, so the implication of the data is more general. (They can still be wrong, of course, but in slightly fewer ways.)
As another side point, empirical work in economics, psychology, or any other social science is really hard in part because all of our evidence - even from randomized trials - is conditioned on basically everything about the modern world.
When a study is done on almost any subject, all of the data, and therefore all of the conclusions, are premised on everything from the temperatures at the time of the study to the dietary history of the participants. Randomizing helps with a few issues, but not all of them.
For example, there are reasonable concerns that most social science studies use WEIRD (Western, Educated, Industrialized, Rich, and Democratic) subjects. This is the same problem noted earlier: even if they are randomly assigned, we're conditioning on the class of participants.
Similar issues exist everywhere - in political science, for example. Any possible study of what occurs makes implicit assumptions. Even if we could randomize (we can't), the world where we want to apply conclusions is later in history than the world where the study happened.
And I certainly agree that when we see unexpected results we should sometimes accept them. Still, those customs officers were right to question their instruments before drawing a conclusion. We should always double-check that our assumptions hold. Often they do; sometimes they don't.
But back to our main point: small-n studies are criticized because they provide little evidence. That's better than nothing, but small n means the result is conditioned not only on the subjects, but also on a variety of other filters - publication bias, the garden of forking paths, etc.
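One standard way to see the small-n problem (textbook statistics, not a claim from the thread): the standard error of a sample mean shrinks only with the square root of the sample size,

$$\mathrm{SE}(\bar{x}) = \frac{\sigma}{\sqrt{n}}$$

so quadrupling n merely halves the noise - and even that calculation assumes the n observations are independent draws from the population we care about, which the filters above call into question.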
We're also cautioned by @slatestarcodex to beware the man of one study - slatestarcodex.com/2014/12/12/bew… This is a closely related point. Even if a study is large, randomized, and pre-registered, those facts can only address a subset of our concerns.
When data is gathered from subjects similar to those we want to generalize to, it helps. But even in randomized medical trials designed to show general safety and efficacy, @hvanspall noted that they usually exclude women, children, the elderly, etc. jamanetwork.com/journals/jama/…
OK, so all evidence is limited. What do we do now?
Part of the answer is just to be aware of those limitations - don't assume that "a study showed X" can imply X is true in the current context. Yes, the study showed it, but it showed it IN MICE, @justsaysinmice, or among a small subset of people, @JustSaysInWEIRD.
There is also a place for intellectual humility. We don't usually have absolute or perfect answers.

That means we should identify what the key uncertainties about our conclusions are. If possible, we should also find where further evidence would help, and consider gathering it.
Another key part of the answer is to build theory, rather than focus on individual studies. If we understand exactly why a given treatment kills cancer in mice, we have a much better basis for judging whether it will do the same in humans.
This is particularly useful in social science, where everything is changing. We need to build predictive theories that explain a phenomenon, acknowledging context, instead of justifying beliefs based on evidence that may not be relevant.
Another side point: we can and should have valid theories that are conditional.

Theory predicts that X% of height is genetic, conditioned on the current distribution of population genetic variation, on sufficient nutrition as a child, on age, and on not having osteoporosis.
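In symbols (a standard quantitative-genetics sketch, not a formula from the thread), "X% of height is genetic" is a statement about heritability, which is defined only relative to a particular population and environment:

$$h^2 = \frac{\mathrm{Var}(G)}{\mathrm{Var}(G) + \mathrm{Var}(E)}$$

If the distribution of environments changes - say, childhood nutrition improves - Var(E) changes and so does h^2, even though no biology changed. The theory remains valid; it is just explicitly conditional.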
This type of conditional theorizing is great, and applies widely. In economics, we have theoretical models for why international trade is widely beneficial, and the data supports the theory, but only conditional on a number of theoretical and empirical factors.
Even with minimal theory, data can allow empirical economics, for example, to estimate a price elasticity - but it is conditioned on the current regulatory schemes, trade laws, etc. It's useful, but those conditions are critical.
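For instance (a textbook sketch, not the author's model), a price elasticity is often estimated as the slope of a log-log demand regression:

$$\ln Q_t = \alpha + \varepsilon \ln P_t + u_t, \qquad \varepsilon = \frac{\partial \ln Q}{\partial \ln P}$$

The estimated elasticity is only meaningful under the regulatory regime, trade rules, and demand conditions that generated the data - change those, and the measured value need not carry over.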
Further side point: many conditions have to do with identifying causality. That's critical, but I'm not talking about it now - it's a closely related point, but it's complex for additional reasons.
Back to the main issue.

Without paying attention to the conditions, data can and will be taken to imply things incorrectly. If the conditions have changed, and we don't notice, we will often draw sweeping but completely invalid conclusions.
Is screen time bad for *MY* children, *NOW*? Even if someone did an RCT in 2000, it might not apply now. Was it bad in 2000 because it displaced social time? Now socialization is largely online. Was it good in 2000 because it taught typing skills? Now, kids learn that in school.
If we have a theoretical understanding of why and how a result occurred, and - when the scientific method was actually followed - people have actively attempted to find alternative explanations or to disprove the theory, we can have confidence about where it generalizes.
I was warned in graduate school that studies, data, and models don't make, or even imply, conclusions. This was drilled into my head by Paul Davis, my thesis advisor, among others. People - researchers - make conclusions based on studies. Hopefully they are careful.
Our conclusions should not - cannot - be based on data alone. They must be based on data, plus understanding of how the data was collected and interpreted, models that predict how and why the result occurred, and attempts to find alternatives to those models.
A valid empirical understanding of the world, a scientific understanding of the world, requires us to do more than point to a study. Yes, empirical data checks our theories and assumptions against the real world. That is absolutely necessary, but it's far from sufficient.
As usual, I'll just assume this blew up.
If you liked the tweetstorm, check out my other work!