Causal Discovery with Continuous Additive Noise Models
We consider the problem of learning causal directed acyclic graphs from an
observational joint distribution. One can use these graphs to predict the
outcome of interventional experiments, for which data are often not available.
We show that if the observational distribution follows a structural equation
model with an additive noise structure, the directed acyclic graph becomes
identifiable from the distribution under mild conditions. This constitutes an
interesting alternative to traditional methods that assume faithfulness and
identify only the Markov equivalence class of the graph, thus leaving some
edges undirected. We provide practical algorithms for finitely many samples,
RESIT (Regression with Subsequent Independence Test) and two methods based on
an independence score. We prove that RESIT is correct in the population setting
and provide an empirical evaluation.
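In the bivariate case, the idea behind RESIT can be sketched numerically: regress the candidate effect on the candidate cause, then test whether the residuals are independent of the input; under an additive noise model, independence holds only in the causal direction. The sketch below is a toy illustration, not the authors' implementation — it substitutes a polynomial fit for nonparametric regression and a distance-correlation score for a proper independence test (the paper's RESIT uses regression with an independence test such as HSIC).

```python
import numpy as np

def dist_corr(a, b):
    # Sample distance correlation (a stand-in dependence measure):
    # close to 0 when a and b are independent, larger otherwise.
    A = np.abs(a[:, None] - a[None, :])
    B = np.abs(b[:, None] - b[None, :])
    A = A - A.mean(axis=0) - A.mean(axis=1)[:, None] + A.mean()
    B = B - B.mean(axis=0) - B.mean(axis=1)[:, None] + B.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

def anm_score(x, y, deg=3):
    # Fit y = f(x) + residual (polynomial fit as a crude regressor),
    # then score dependence between the input and the residuals.
    resid = y - np.polyval(np.polyfit(x, y, deg), x)
    return dist_corr(x, resid)

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)
y = x**3 + rng.normal(0.0, 1.0, 500)  # additive noise model X -> Y

forward = anm_score(x, y)   # residuals ~ noise, nearly independent of x
backward = anm_score(y, x)  # residuals remain dependent on y
direction = "X -> Y" if forward < backward else "Y -> X"
```

The direction with the lower dependence score is accepted as causal; here the forward (true) direction wins because its residuals are just the independent noise term.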
Graphical Markov models: overview
We describe how graphical Markov models started to emerge in the last 40
years, based on three essential concepts that had been developed independently
more than a century ago. Sequences of joint or single regressions and their
regression graphs are singled out as being best suited for analyzing
longitudinal data and for tracing developmental pathways. Interpretations are
illustrated using two sets of data and some of the more recent, important
results for sequences of regressions are summarized. Comment: 22 pages, 9 figures
Identifiability of Causal Graphs using Functional Models
This work addresses the following question: Under what assumptions on the
data generating process can one infer the causal graph from the joint
distribution? The approach taken by conditional independence-based causal
discovery methods is based on two assumptions: the Markov condition and
faithfulness. It has been shown that under these assumptions the causal graph
can be identified up to Markov equivalence (some arrows remain undirected)
using methods like the PC algorithm. In this work we propose an alternative by
defining Identifiable Functional Model Classes (IFMOCs). As our main theorem we
prove that if the data generating process belongs to an IFMOC, one can identify
the complete causal graph. To the best of our knowledge this is the first
identifiability result of this kind that is not limited to linear functional
relationships. We discuss how the IFMOC assumption and the Markov and
faithfulness assumptions relate to each other and explain why we believe that
the IFMOC assumption can be tested more easily on given data. We further
provide a practical algorithm that recovers the causal graph from finitely many
data points; experiments on simulated data support the theoretical findings.
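The classical non-identifiable case that results of this kind must exclude — a linear function with Gaussian noise — can also be checked numerically: there, an additive noise model fits in both directions, so no residual-independence score can orient the edge. A toy sketch (polynomial regression and distance correlation are our choices for illustration, not the paper's method):

```python
import numpy as np

def dist_corr(a, b):
    # Sample distance correlation: near 0 for independent samples.
    A = np.abs(a[:, None] - a[None, :])
    B = np.abs(b[:, None] - b[None, :])
    A = A - A.mean(axis=0) - A.mean(axis=1)[:, None] + A.mean()
    B = B - B.mean(axis=0) - B.mean(axis=1)[:, None] + B.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

def anm_score(x, y, deg=3):
    # Regress y on x, then score dependence of residuals on the input.
    resid = y - np.polyval(np.polyfit(x, y, deg), x)
    return dist_corr(x, resid)

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(size=500)  # linear function, Gaussian noise

forward = anm_score(x, y)
backward = anm_score(y, x)
# Both residual sets look independent of their regressor, so both
# scores are small: the direction is not identifiable in this case.
```

This is exactly why identifiable function classes must restrict the combination of function and noise: outside such degenerate cases (linear-Gaussian being the canonical one), the asymmetry exploited above reappears and the full graph becomes identifiable.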