Discover and read the best of Twitter Threads about #CausalTwitter

Most recent (4)

Excited to have @mpiccininni3 speaking at the @turinginst causal inference interest group about whether cognitive screening tests should be corrected for age and education

#CIIG #EpiTwitter #CausalTwitter
Marco explains that it is fairly standard, when performing cognitive screening tests, to 'correct' (or standardise) the result for demographic characteristics (e.g. age and level of education). The resulting score tells you how someone performed relative to people of similar age & education.
'Correcting' the cognitive score for age and education is therefore equivalent to ignoring the part of the cognitive test score that is due to age and education.
So an older (or less educated) person needs to score lower on the raw test to achieve the same 'corrected' result.
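To make the kind of 'correction' being described a little more concrete, here is a minimal sketch of one common way to demographically standardise a score: regress the raw score on age and education and keep only the deviation from that demographic norm. This is not the specific procedure from the talk, and the data and column names are made up.

```python
# A minimal sketch of demographic 'correction' of a cognitive score,
# assuming a simple linear norming model (not the specific procedure
# discussed in the talk). Data and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "raw_score": [25, 28, 22, 30, 18, 27],
    "age":       [72, 65, 80, 60, 85, 70],
    "education": [10, 16, 8, 18, 6, 12],   # years of schooling
})

# Predict the 'expected' score for someone of that age and education...
norm_model = LinearRegression().fit(df[["age", "education"]], df["raw_score"])
expected = norm_model.predict(df[["age", "education"]])

# ...and keep only the deviation from that expectation. This is the sense in
# which the corrected score ignores the part of the raw score attributable to
# age and education: two people with the same deviation from their demographic
# norm get the same corrected score, even if their raw scores differ.
df["corrected_score"] = df["raw_score"] - expected
print(df)
```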
DoWhy 0.9 includes some exciting new extensions and features, including better sensitivity analyses, new identification algorithms, and more. I'm particularly excited to see so many new contributors joining in on this release!

#causality #causalinf #causaltwitter
Ezequiel Smucler (@hazqiyal) adds an identification algorithm to find optimal backdoor adjustment sets that yield estimators with the smallest asymptotic variance
pywhy.org/dowhy/v0.9/exa…
Jeffrey Gleason adds e-value sensitivity analysis: pywhy.org/dowhy/v0.9.1/e…

and Anusha0409 adds sensitivity analysis for non-parametric estimators: pywhy.org/dowhy/v0.9.1/e…
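For orientation, here is a minimal sketch of the standard DoWhy workflow (model, identify, estimate, refute) that these new features plug into. The data and variable names are synthetic, and the exact APIs for the new v0.9 features are in the linked docs rather than shown here.

```python
# A minimal sketch of the standard DoWhy model -> identify -> estimate -> refute
# workflow that the v0.9 additions extend. Data and variable names are synthetic;
# see the linked pywhy.org docs for the new features themselves.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 2000
w = rng.normal(size=n)                          # common cause
t = (w + rng.normal(size=n) > 0).astype(int)    # binary treatment
y = 2 * t + w + rng.normal(size=n)              # outcome with true effect 2
df = pd.DataFrame({"W": w, "T": t, "Y": y})

model = CausalModel(data=df, treatment="T", outcome="Y", common_causes=["W"])
estimand = model.identify_effect()              # backdoor identification
estimate = model.estimate_effect(
    estimand, method_name="backdoor.linear_regression"
)
print(estimate.value)

# Robustness checks live in the refuters; v0.9 adds new sensitivity analyses
# (e-values, non-parametric estimators) alongside existing refuters like this:
refutation = model.refute_estimate(
    estimand, estimate, method_name="placebo_treatment_refuter"
)
print(refutation)
```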
🚨 New blog post (a series!)
An Illustrated Guide to Targeted Maximum Likelihood Estimation 🎯

Part 1: motivation for “targeting” an estimand for inference and why we can & should incorporate data-adaptive/machine learning models

🧵1/
khstats.com/blog/tmle/tuto… #causaltwitter
Part 2: step-by-step explanations of the TMLE algorithm for a binary exposure and outcome using words, equations, #rstats code, and colored boxes™️

This corresponds to a printable TMLE “cheat sheet”
khstats.com/blog/tmle/tuto…
2/
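For readers who like to see code next to the words and equations, here is a bare-bones numeric sketch of those same TMLE steps for a binary exposure and outcome. The post itself uses #rstats; this is an illustrative Python translation with plain logistic regressions standing in for the data-adaptive learners you would normally plug in, and the data and variable names are made up.

```python
# Bare-bones sketch of TMLE for the ATE with binary exposure A and binary
# outcome Y. Simple logistic regressions stand in for data-adaptive learners.
import numpy as np
import statsmodels.api as sm
from scipy.special import expit, logit
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
W = rng.normal(size=(n, 2))                            # covariates
A = rng.binomial(1, expit(0.5 * W[:, 0]))              # binary exposure
Y = rng.binomial(1, expit(-0.5 + A + 0.7 * W[:, 1]))   # binary outcome

# Step 1: initial estimate of the outcome regression Q(A, W) = P(Y=1 | A, W)
AW = np.column_stack([A, W])
Q_fit = LogisticRegression().fit(AW, Y)
Q_AW = Q_fit.predict_proba(AW)[:, 1]
Q_1W = Q_fit.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
Q_0W = Q_fit.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]

# Step 2: propensity score g(W) = P(A=1 | W)
g_W = LogisticRegression().fit(W, A).predict_proba(W)[:, 1]

# Step 3: "clever covariate" H(A, W)
H_AW = A / g_W - (1 - A) / (1 - g_W)

# Step 4: fluctuation step, a logistic regression of Y on H with the initial
# logit(Q) as an offset and no intercept, giving the fluctuation parameter eps
flux = sm.GLM(Y, H_AW.reshape(-1, 1), offset=logit(Q_AW),
              family=sm.families.Binomial()).fit()
eps = flux.params[0]

# Step 5: targeted update of the outcome predictions under A=1 and A=0
Q_1W_star = expit(logit(Q_1W) + eps / g_W)
Q_0W_star = expit(logit(Q_0W) - eps / (1 - g_W))

# Step 6: plug-in estimate of the average treatment effect
print(np.mean(Q_1W_star - Q_0W_star))
```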
Part 3: The statistical properties of TMLE 🧚🏽‍♂️

Double robustness! Efficiency! Explanations plus a brief outline of why TMLE works & lots of references to learn more if you‘d like.

khstats.com/blog/tmle/tuto…
3/
Tweetorial on going from regression to estimating causal effects with machine learning.

I get a lot of questions from students regarding how to think about this *conceptually*, so this is a beginner-friendly #causaltwitter high-level overview with additional references. [Image: hand-drawn graphic of a regression formula E(Y|T,X) = β₀ + …]
One thing to keep in mind is that a traditional parametric regression is estimating a conditional mean E(Y|T,X).

The bias-variance tradeoff is for that conditional mean, not the coefficients in front of T and X.
The next step to think about conceptually is that this conditional mean E(Y|T,X) can be estimated with other tools. Yes, standard parametric regression, but also machine learning tools like random forests.

It’s OK if this is a big conceptual leap for you! It is for many people! [Image: hand-drawn graphic of the conditional mean E(Y|T,X)]
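To make that leap concrete, here is a small illustrative sketch (synthetic data and hypothetical variable names, not taken from the thread) showing the same conditional mean E(Y|T,X) estimated with an ordinary linear regression and with a random forest, and both used the same way in a simple plug-in contrast.

```python
# Sketch of the conceptual point: E(Y | T, X) is just a conditional mean, and
# any regression tool, parametric or machine learning, can estimate it.
# The data and the plug-in contrast are purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 3000
X = rng.normal(size=(n, 3))                      # covariates / confounders
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # binary treatment
Y = 1.5 * T + X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(size=n)

TX = np.column_stack([T, X])

# Two different estimators of the same conditional mean E(Y | T, X)
linear = LinearRegression().fit(TX, Y)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(TX, Y)

# Both can be used the same way downstream, e.g. a g-computation-style
# contrast: predict each unit's outcome under T=1 and T=0, then average.
def plug_in_contrast(model):
    y1 = model.predict(np.column_stack([np.ones(n), X]))
    y0 = model.predict(np.column_stack([np.zeros(n), X]))
    return np.mean(y1 - y0)

print("linear regression:", plug_in_contrast(linear))
print("random forest:    ", plug_in_contrast(forest))
```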