The lesson of causal discovery algorithms for quantum correlations: Causal explanations of Bell-inequality violations require fine-tuning
An active area of research in the fields of machine learning and statistics
is the development of causal discovery algorithms, the purpose of which is to
infer the causal relations that hold among a set of variables from the
correlations that these exhibit. We apply some of these algorithms to the
correlations that arise for entangled quantum systems. We show that they cannot
distinguish correlations that satisfy Bell inequalities from correlations that
violate Bell inequalities, and consequently that they cannot do justice to the
challenges of explaining certain quantum correlations causally. Nonetheless, by
adapting the conceptual tools of causal inference, we can show that any attempt
to provide a causal explanation of nonsignalling correlations that violate a
Bell inequality must contradict a core principle of these algorithms, namely,
that an observed statistical independence between variables should not be
explained by fine-tuning of the causal parameters. In particular, we
demonstrate the need for such fine-tuning for most of the causal mechanisms
that have been proposed to underlie Bell correlations, including superluminal
causal influences, superdeterminism (that is, a denial of freedom of choice of
settings), and retrocausal influences which do not introduce causal cycles.

Comment: 29 pages, 28 figs. New in v2: a section presenting in detail our
characterization of Bell's theorem as a contradiction arising from (i) the
framework of causal models, (ii) the principle of no fine-tuning, and (iii)
certain operational features of quantum theory; a section explaining why a
denial of hidden variables affords even fewer opportunities for causal
explanations of quantum correlations.
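The correlations at issue can be made concrete. The following minimal sketch (an illustration, not drawn from the paper) computes the CHSH quantity for the singlet state, whose correlations E(a, b) = -cos(a - b) exceed the bound |S| <= 2 that any fine-tuning-free local causal model must obey, reaching Tsirelson's bound of 2*sqrt(2):

```python
import math

def E(a, b):
    # Singlet-state correlation between measurement angles a and b.
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    # CHSH combination; any local causal model satisfies |S| <= 2.
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Standard angle choices that maximize the quantum value.
S = chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S))  # ~2.828, i.e. 2*sqrt(2) > 2
```

A causal discovery algorithm sees only the resulting joint statistics, which is why it cannot tell these correlations apart from ones satisfying the inequality.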
We Are Not Your Real Parents: Telling Causal from Confounded using MDL
Given data over variables X and Y, we consider the problem of finding out whether X jointly causes Y or whether they are all confounded by an unobserved latent variable Z. To do so, we take an information-theoretic approach based on Kolmogorov complexity. In a nutshell, we follow the postulate that first encoding the true cause, and then the effects given that cause, results in a shorter description than any other encoding of the observed variables. The ideal score is not computable, and hence we have to approximate it. We propose to do so using the Minimum Description Length (MDL) principle. We compare the MDL scores under the models where X causes Y and where there exists a latent variable Z confounding both X and Y, and show our scores are consistent. To find potential confounders we propose using latent factor modeling, in particular, probabilistic PCA (PPCA). Empirical evaluation on both synthetic and real-world data shows that our method, CoCa, performs very well -- even when the true generating process of the data is far from the assumptions made by the models we use. Moreover, it is robust, as its accuracy goes hand in hand with its confidence.
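The model comparison described above can be sketched in a few lines. This is not CoCa's actual scoring (the paper uses refined MDL codelengths); as a rough stand-in, the sketch below uses BIC-penalized Gaussian negative log-likelihoods, fits the confounded model with the closed-form PPCA solution, and assumes a hypothetical data-generating process in which a hidden Z drives both X and Y:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000

# Hypothetical confounded data: hidden Z drives both X (3-dim) and Y.
Z = rng.normal(size=(N, 1))
X = Z @ rng.normal(size=(1, 3)) + 0.3 * rng.normal(size=(N, 3))
Y = Z @ rng.normal(size=(1, 1)) + 0.3 * rng.normal(size=(N, 1))
D = np.hstack([X, Y])  # only X and Y are observed
d = D.shape[1]

def gaussian_nll(resid):
    # Codelength (nats) of residuals under per-column ML Gaussians.
    var = resid.var(axis=0)
    return 0.5 * np.sum(len(resid) * (np.log(2 * np.pi * var) + 1))

# Model 1: X jointly causes Y (linear regression Y ~ X).
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
beta, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
nll_causal = gaussian_nll(Xc) + gaussian_nll(Yc - Xc @ beta)
k_causal = 2 * d + X.shape[1]  # means, variances, regression weights
bic_causal = nll_causal + 0.5 * k_causal * np.log(N)

# Model 2: one latent confounder, fit by PPCA (Tipping-Bishop closed form).
Dc = D - D.mean(0)
S = np.cov(Dc, rowvar=False)
evals, evecs = np.linalg.eigh(S)      # eigenvalues in ascending order
sigma2 = evals[:-1].mean()            # ML noise variance: discarded spectrum
w = evecs[:, -1:] * np.sqrt(evals[-1] - sigma2)
C = w @ w.T + sigma2 * np.eye(d)      # PPCA model covariance
nll_ppca = 0.5 * N * (d * np.log(2 * np.pi)
                      + np.linalg.slogdet(C)[1]
                      + np.trace(np.linalg.solve(C, S)))
k_ppca = 2 * d + 1                    # means, loadings, noise variance
bic_ppca = nll_ppca + 0.5 * k_ppca * np.log(N)

print("causal BIC:", bic_causal, "confounded BIC:", bic_ppca)
```

On data generated this way, the confounded model attains the lower (better) score, since the rank-one-plus-noise covariance PPCA assumes matches the shared latent driver, whereas the causal model wastes codelength describing the X columns as independent.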