Probabilistic Matching: Causal Inference under Measurement Errors
The abundance of data produced daily from a large variety of sources has
boosted the need for novel approaches to causal inference from observational
data. Observational data often contain noisy or missing entries. Moreover,
causal inference studies may require unobserved high-level information that
must be inferred from other observed attributes. In such cases, inaccuracies
of the applied inference methods result in noisy outputs. In this study, we
propose a novel approach for causal inference when one or more key variables
are noisy. Our method exploits knowledge about the uncertainty of the real
values of the key variables to reduce the bias induced by noisy measurements.
We evaluate our approach against existing methods on both simulated and real
scenarios, and we demonstrate that it reduces bias and avoids false
causal-inference conclusions in most cases.
Comment: In Proceedings of the International Joint Conference on Neural
Networks (IJCNN) 201
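To make the idea concrete: when the misclassification rates of a noisy binary treatment label are known, the naive difference-in-means can be debiased by inverting the mixing they induce. The sketch below is a generic measurement-error correction, not the paper's algorithm; the helper `corrected_ate` and the assumed-known `sensitivity`, `specificity`, and `prevalence` parameters are hypothetical illustration.

```python
import random

def corrected_ate(y, t_obs, sensitivity, specificity, prevalence):
    """Debias the difference-in-means when the binary treatment label is
    noisy with known error rates (hypothetical helper, for illustration)."""
    n1 = sum(t_obs)
    m1 = sum(yi for yi, ti in zip(y, t_obs) if ti) / n1
    m0 = sum(yi for yi, ti in zip(y, t_obs) if not ti) / (len(t_obs) - n1)
    p_obs1 = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    # Bayes' rule: probability the TRUE treatment is 1 given the observed label
    a = sensitivity * prevalence / p_obs1              # P(T=1 | observed 1)
    b = (1 - sensitivity) * prevalence / (1 - p_obs1)  # P(T=1 | observed 0)
    # Observed means mix the true group means:
    #   m1 = a*mu1 + (1-a)*mu0,  m0 = b*mu1 + (1-b)*mu0  -> solve 2x2 system
    det = a * (1 - b) - b * (1 - a)
    mu1 = (m1 * (1 - b) - m0 * (1 - a)) / det
    mu0 = (a * m0 - b * m1) / det
    return mu1 - mu0

random.seed(0)
n = 200_000
t_true = [random.random() < 0.5 for _ in range(n)]
y = [2.0 * t + random.gauss(0, 1) for t in t_true]          # true ATE = 2.0
t_obs = [t if random.random() < 0.9 else (not t) for t in t_true]  # 10% flips

naive = (sum(yi for yi, ti in zip(y, t_obs) if ti) / sum(t_obs)
         - sum(yi for yi, ti in zip(y, t_obs) if not ti) / (n - sum(t_obs)))
corrected = corrected_ate(y, t_obs, sensitivity=0.9, specificity=0.9,
                          prevalence=0.5)
print(round(naive, 2), round(corrected, 2))
```

With 10% label noise the naive estimate is attenuated toward roughly 1.6, while the corrected estimate recovers the true effect of 2.0, illustrating how knowledge of the measurement-error process reduces the bias.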
MALTS: Matching After Learning to Stretch
We introduce a flexible framework that produces high-quality, almost-exact
matches for causal inference. Most prior work on matching uses ad hoc distance
metrics, often leading to poor-quality matches, particularly when there are
irrelevant covariates. In this work, we learn an interpretable distance metric
for matching, which leads to substantially higher-quality matches. The learned
distance metric stretches the covariate space according to each covariate's
contribution to outcome prediction: this stretching means that mismatches on
important covariates carry a larger penalty than mismatches on irrelevant
covariates. Our ability to learn flexible distance metrics leads to matches
that are interpretable and useful for the estimation of conditional average
treatment effects.
Comment: 40 pages, 5 tables, 12 figures
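The stretching idea can be sketched with a much simpler stand-in for the learned metric: weight each covariate by its (univariate) predictive strength for the outcome, then use those weights in a weighted Euclidean distance. This is not MALTS's actual training objective; the `stretch_weights` helper below is a hypothetical illustration of why mismatches on important covariates end up penalised more than mismatches on irrelevant ones.

```python
import random

def stretch_weights(x, y):
    """Per-covariate stretch factors: absolute univariate regression slope
    of the outcome on each covariate (illustrative stand-in, not MALTS)."""
    n, d = len(x), len(x[0])
    my = sum(y) / n
    weights = []
    for j in range(d):
        col = [row[j] for row in x]
        mx = sum(col) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y)) / n
        var = sum((a - mx) ** 2 for a in col) / n
        weights.append(abs(cov / var) if var > 0 else 0.0)
    return weights

def stretched_dist(u, v, w):
    """Weighted Euclidean distance in the stretched covariate space."""
    return sum(wj * (a - b) ** 2 for a, b, wj in zip(u, v, w)) ** 0.5

random.seed(1)
n = 500
# covariate 0 drives the outcome; covariate 1 is irrelevant noise
x = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
y = [3.0 * xi[0] + random.gauss(0, 0.1) for xi in x]
w = stretch_weights(x, y)

# a unit mismatch on the important covariate costs more than one on the
# irrelevant covariate
query = [0.0, 0.0]
far_on_important = [1.0, 0.0]
far_on_irrelevant = [0.0, 1.0]
print(stretched_dist(query, far_on_important, w),
      stretched_dist(query, far_on_irrelevant, w))
```

A matching procedure that picks nearest neighbours under `stretched_dist` will therefore prefer candidates that agree on the outcome-relevant covariate, which is the behaviour the abstract describes.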