Causal Discovery in Linear Latent Variable Models Subject to Measurement Error
We focus on causal discovery in the presence of measurement error in linear
systems where the mixing matrix, i.e., the matrix relating the independent
exogenous noise terms to the observed variables, is identified up to
permutation and scaling of the columns. We demonstrate a somewhat surprising
connection between this problem and causal discovery in the presence of
unobserved parentless causes, in the sense that there is a mapping, given by
the mixing matrix, between the underlying models to be inferred in these
problems. Consequently, any identifiability result based on the mixing matrix
for one model translates to an identifiability result for the other model. We
characterize to what extent the causal models can be identified under a
two-part faithfulness assumption. Under only the first part of the assumption
(corresponding to the conventional definition of faithfulness), the structure
can be learned up to the causal ordering among an ordered grouping of the
variables, but not all edges across the groups can be identified. We further
show that if both parts of the faithfulness assumption are imposed, the
structure can be learned up to a more refined ordered grouping. As a result of
this refinement, for the latent variable model with unobserved parentless
causes, the structure can be identified. Based on our theoretical results, we
propose causal structure learning methods for both models, and evaluate their
performance on synthetic data.

Comment: Accepted at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
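For intuition, "identified up to permutation and scaling of the columns" means the mixing matrix A can only be recovered as A P D for some permutation matrix P and invertible diagonal D, since reordering and rescaling the independent noise terms accordingly leaves the observed data unchanged. A minimal numpy sketch of this indeterminacy (the matrices here are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# True mixing matrix: observed X = A @ N for independent exogenous noise N.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.2, 0.0, 1.0]])

# An observationally equivalent representation: permute and rescale columns...
P = np.eye(3)[:, [2, 0, 1]]          # permutation matrix
D = np.diag([2.0, -0.5, 3.0])        # invertible diagonal rescaling
A_alt = A @ P @ D

# ...and apply the inverse transformation to the noise terms.
N = rng.standard_normal((3, 5))      # 5 samples of 3 independent noise terms
N_alt = np.diag(1.0 / np.diag(D)) @ P.T @ N

# Both (A, N) and (A_alt, N_alt) generate exactly the same observations,
# so A is only identifiable up to column permutation and scaling.
print(np.allclose(A @ N, A_alt @ N_alt))  # True
```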
Probabilistic Matching: Causal Inference under Measurement Errors
The abundance of data produced daily from a large variety of sources has
increased the need for novel approaches to causal inference from
observational data. Observational data often contain noisy or missing
entries. Moreover, causal inference studies may require unobserved
high-level information that must be inferred from other observed attributes.
In such cases, inaccuracies of the applied inference methods result in noisy
outputs. In this study, we propose a novel approach to causal inference when
one or more key variables are noisy. Our method utilizes the knowledge about
the uncertainty of the real values of key variables in order to reduce the bias
induced by noisy measurements. We evaluate our approach against existing
methods on both simulated and real scenarios, and we demonstrate that our
method reduces bias and avoids false causal conclusions in most cases.

Comment: In Proceedings of the International Joint Conference on Neural Networks (IJCNN) 201
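As background for why noisy key variables distort causal estimates: in a simple linear setting, classical measurement error on a regressor attenuates the estimated effect by the reliability ratio sigma_x^2 / (sigma_x^2 + sigma_e^2). This is the standard textbook attenuation effect, not the paper's correction method, and can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta = 2.0

x = rng.normal(0.0, 1.0, n)            # true key variable, variance 1
y = beta * x + rng.normal(0.0, 0.5, n)
x_noisy = x + rng.normal(0.0, 1.0, n)  # measurement error, variance 1

# OLS slope on a single regressor: cov(y, x) / var(x).
slope_true = np.cov(y, x)[0, 1] / np.var(x)
slope_noisy = np.cov(y, x_noisy)[0, 1] / np.var(x_noisy)

reliability = 1.0 / (1.0 + 1.0)        # sigma_x^2 / (sigma_x^2 + sigma_e^2)
print(slope_true)    # ~2.0: unbiased with the true variable
print(slope_noisy)   # ~1.0: attenuated by the reliability ratio
```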
Regularizing towards Causal Invariance: Linear Models with Proxies
We propose a method for learning linear models whose predictive performance
is robust to causal interventions on unobserved variables, when noisy proxies
of those variables are available. Our approach takes the form of a
regularization term that trades off between in-distribution performance and
robustness to interventions. Under the assumption of a linear structural causal
model, we show that a single proxy can be used to create estimators that are
prediction optimal under interventions of bounded strength. This strength
depends on the magnitude of the measurement noise in the proxy, which is, in
general, not identifiable. In the case of two proxy variables, we propose a
modified estimator that is prediction optimal under interventions up to a known
strength. We further show how to extend these estimators to scenarios where
additional information about the "test time" intervention is available during
training. We evaluate our theoretical findings in synthetic experiments and
using real data of hourly pollution levels across several cities in China.

Comment: ICML 2021 (to appear).
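The general shape of such a trade-off can be sketched with a generalized ridge objective, minimize ||y - Xw||^2 + gamma ||Gamma w||^2: gamma = 0 recovers ordinary least squares (best in-distribution fit), while large gamma shrinks the coefficients along the penalized directions, sacrificing fit for robustness to shifts in those directions. This is a hypothetical illustration of the regularization idea, not the paper's exact construction of the penalty from proxies:

```python
import numpy as np

def regularized_linear(X, y, Gamma, gamma):
    """Closed-form minimizer of ||y - X w||^2 + gamma * ||Gamma w||^2.

    gamma = 0 recovers ordinary least squares; increasing gamma shrinks w
    along the directions penalized by Gamma. (Gamma and gamma are
    illustrative placeholders, not the paper's proxy-based construction.)
    """
    A = X.T @ X + gamma * Gamma.T @ Gamma
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 2))
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(500)

Gamma = np.array([[0.0, 1.0]])   # penalize reliance on the second feature
w_ols = regularized_linear(X, y, Gamma, 0.0)
w_rob = regularized_linear(X, y, Gamma, 1e4)
print(w_ols)   # close to the in-distribution optimum [1.0, 2.0]
print(w_rob)   # second coefficient shrunk toward 0
```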