An Upper Bound for Random Measurement Error in Causal Discovery
Causal discovery algorithms infer causal relations from data based on several
assumptions, including notably the absence of measurement error. However, this
assumption is most likely violated in practical applications, which may result
in erroneous, irreproducible results. In this work we show how to obtain an
upper bound for the variance of random measurement error from the covariance
matrix of measured variables and how to use this upper bound as a correction
for constraint-based causal discovery. We demonstrate a practical application
of our approach on both simulated data and real-world protein signaling data.

Comment: Published in Proceedings of the 34th Annual Conference on Uncertainty in Artificial Intelligence (UAI-18).
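A rough illustration of the idea above (a sketch under simplifying assumptions, not the paper's exact derivation): if each measured variable equals a latent variable plus independent error of common variance σ², the measured covariance equals the latent covariance plus σ²·I. Since the latent covariance must be positive semidefinite, the smallest eigenvalue of the measured covariance is an upper bound on σ², which can then be subtracted out before computing the partial correlations used in constraint-based conditional-independence tests. All variable names and the simulated model here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate latent variables with a linear causal chain x -> y -> z,
# then add i.i.d. Gaussian measurement error of variance sigma2.
n, sigma2 = 5000, 0.25
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.6, size=n)
z = 0.5 * y + rng.normal(scale=0.7, size=n)
latent = np.column_stack([x, y, z])
measured = latent + rng.normal(scale=np.sqrt(sigma2), size=latent.shape)

cov = np.cov(measured, rowvar=False)

# cov(measured) = cov(latent) + sigma2 * I, and cov(latent) is positive
# semidefinite, so sigma2 is at most the smallest eigenvalue of
# cov(measured).  The bound is conservative but always valid.
upper_bound = np.linalg.eigvalsh(cov).min()
print(f"true error variance: {sigma2}, upper bound: {upper_bound:.3f}")

# Correction step: subtract the bound from the diagonal before computing
# the partial correlations used by constraint-based CI tests.
cov_corrected = cov - upper_bound * np.eye(cov.shape[0])
```

In this toy setup the bound is loose (the latent covariance has strictly positive smallest eigenvalue), but it never undershoots the true error variance, which is what the correction requires.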
We Are Not Your Real Parents: Telling Causal from Confounded using MDL
Given data over variables X and Y, we consider the problem of finding out whether X jointly causes Y or whether they are all confounded by an unobserved latent variable Z. To do so, we take an information-theoretic approach based on Kolmogorov complexity. In a nutshell, we follow the postulate that first encoding the true cause, and then the effects given that cause, results in a shorter description than any other encoding of the observed variables. The ideal score is not computable, and hence we have to approximate it. We propose to do so using the Minimum Description Length (MDL) principle. We compare the MDL scores under the model where X causes Y and the model where there exists a latent variable Z confounding both X and Y, and show that our scores are consistent. To find potential confounders we propose using latent factor modeling, in particular probabilistic PCA (PPCA). Empirical evaluation on both synthetic and real-world data shows that our method, CoCa, performs very well -- even when the true generating process of the data is far from the assumptions made by the models we use. Moreover, it is robust, as its accuracy goes hand in hand with its confidence.
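The PPCA step can be sketched concretely (an illustration of latent factor recovery, not the CoCa score itself; the data-generating model below is assumed for the example): fit a one-factor probabilistic PCA to data produced by a hidden confounder and check that the recovered factor tracks it. scikit-learn's `PCA` fits the PPCA model of Tipping and Bishop, and its `score` method returns the average log-likelihood under that model, i.e. the quantity an MDL-style description length would build on:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n = 2000

# Confounded setting: a single unobserved Z drives three observed variables.
z = rng.normal(size=n)
obs = np.column_stack([
    1.0 * z + rng.normal(scale=0.4, size=n),
    0.8 * z + rng.normal(scale=0.4, size=n),
    -0.9 * z + rng.normal(scale=0.4, size=n),
])

# One-factor probabilistic PCA: .score gives the average per-sample
# log-likelihood of the data under the fitted PPCA model.
ppca = PCA(n_components=1).fit(obs)
z_hat = ppca.transform(obs)[:, 0]
avg_loglik = ppca.score(obs)

# The recovered factor should track the true confounder (up to sign).
corr = np.corrcoef(z_hat, z)[0, 1]
print(f"|corr(z_hat, z)| = {abs(corr):.3f}, avg log-lik = {avg_loglik:.2f}")
```

When the confounded model explains the data well, this likelihood term is high, so the confounded encoding is short; comparing it against the likelihood of a causal model is the kind of two-model contest the abstract describes.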