Towards a Learning Theory of Cause-Effect Inference
We pose causal inference as the problem of learning to classify probability
distributions. In particular, we assume access to a collection
{(S_i, l_i)}_{i=1}^n, where each S_i is a sample drawn from the
probability distribution of a pair of variables (X_i, Y_i), and l_i is a binary label
indicating whether "X_i → Y_i" or "Y_i → X_i". Given these data,
we build a causal inference rule in two steps. First, we featurize each sample S_i
using the kernel mean embedding associated with some characteristic kernel.
Second, we train a binary classifier on such embeddings to distinguish between
causal directions. We present generalization bounds showing the statistical
consistency and learning rates of the proposed approach, and provide a simple
implementation that achieves state-of-the-art cause-effect inference.
Furthermore, we extend our ideas to infer causal relationships between more
than two variables.
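The two-step recipe above (embed each sample, then classify the embeddings) can be sketched with random Fourier features approximating a Gaussian-kernel mean embedding. Everything below is illustrative, not the paper's setup: the tanh mechanism, sample sizes, feature dimension, and choice of classifier are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

# Random Fourier features approximating a Gaussian-kernel mean embedding
# (the dimension d and implicit bandwidth are hypothetical choices).
d = 200
W = rng.randn(2, d)
b = rng.uniform(0, 2 * np.pi, size=d)

def featurize(S):
    """Approximate kernel mean embedding of a sample S of (x, y) pairs."""
    return np.cos(S @ W + b).mean(axis=0)

def make_pair(flipped):
    """Synthetic pair: x causes y via an invented nonlinear mechanism."""
    x = rng.randn(100)
    y = np.tanh(x) + 0.1 * rng.randn(100)
    S = np.column_stack([y, x]) if flipped else np.column_stack([x, y])
    return S, 0 if flipped else 1          # label 1 means "X -> Y"

# Step 1: featurize each sample with the (approximate) mean embedding.
pairs = [make_pair(flipped=i % 2 == 1) for i in range(300)]
Phi = np.array([featurize(S) for S, _ in pairs])
labels = np.array([l for _, l in pairs])

# Step 2: an ordinary binary classifier on the embeddings.
clf = LogisticRegression(max_iter=1000).fit(Phi, labels)
print("training accuracy:", clf.score(Phi, labels))
```

The point of the sketch is that once each sample is mapped to a fixed-length embedding vector, telling "X → Y" from "Y → X" reduces to standard supervised classification.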
Distinguishing cause from effect using observational data: methods and benchmarks
The discovery of causal relationships from purely observational data is a
fundamental problem in science. The most elementary form of such a causal
discovery problem is to decide whether X causes Y or, alternatively, Y causes
X, given joint observations of two variables X, Y. An example is to decide
whether altitude causes temperature, or vice versa, given only joint
measurements of both variables. Even under the simplifying assumptions of no
confounding, no feedback loops, and no selection bias, such bivariate causal
discovery problems are challenging. Nevertheless, several approaches for
addressing those problems have been proposed in recent years. We review two
families of such methods: Additive Noise Methods (ANM) and Information
Geometric Causal Inference (IGCI). We present the benchmark CauseEffectPairs
that consists of data for 100 different cause-effect pairs selected from 37
datasets from various domains (e.g., meteorology, biology, medicine,
engineering, and economics) and motivate our decisions regarding the "ground
truth" causal directions of all pairs. We evaluate the performance of several
bivariate causal discovery methods on these real-world benchmark data and in
addition on artificially simulated data. Our empirical results on real-world
data indicate that certain methods are indeed able to distinguish cause from
effect using only purely observational data, although more benchmark data would
be needed to obtain statistically significant conclusions. One of the best
performing methods overall is the additive-noise method originally proposed by
Hoyer et al. (2009), which obtains an accuracy of 63 ± 10% and an AUC of
0.74 ± 0.05 on the real-world benchmark. As the main theoretical contribution of
this work we prove the consistency of that method.

Comment: 101 pages, second revision submitted to Journal of Machine Learning
Research
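The additive-noise idea (fit a function in each direction and check whether the residuals look independent of the input) can be illustrated with a minimal sketch. It is not the method of Hoyer et al.: it substitutes a crude plug-in mutual-information estimate for a proper independence test such as HSIC, and the cubic mechanism is an invented example.

```python
import numpy as np

rng = np.random.RandomState(0)

def dependence(a, b, bins=8):
    """Crude independence proxy: plug-in mutual information from a 2-D
    histogram (a real ANM implementation would use HSIC instead)."""
    h, _, _ = np.histogram2d(a, b, bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def anm_score(x, y, deg=3):
    """Fit y = f(x) + e by polynomial regression; lower residual
    dependence on x means a better additive-noise fit."""
    f = np.poly1d(np.polyfit(x, y, deg))
    return dependence(y - f(x), x)

# Invented example: additive noise holds in the X -> Y direction only.
x = rng.uniform(-2, 2, 500)
y = x ** 3 + rng.randn(500)

direction = "X -> Y" if anm_score(x, y) < anm_score(y, x) else "Y -> X"
print(direction)
```

In the causal direction the residuals are just the independent noise, so their dependence score stays near zero; in the anticausal direction the residuals remain structured, which is what the decision rule exploits.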
Structural Agnostic Modeling: Adversarial Learning of Causal Graphs
A new causal discovery method, Structural Agnostic Modeling (SAM), is
presented in this paper. Leveraging both conditional independencies and
distributional asymmetries in the data, SAM aims at recovering full causal
models from continuous observational data in a multivariate non-parametric
setting. The approach is based on a game between players, each estimating the
distribution of one variable conditionally on the others with a neural network,
and an adversary trained to discriminate the overall joint conditional
distribution from that of the original data. An original learning criterion combining
distribution estimation, sparsity and acyclicity constraints is used to enforce
the end-to-end optimization of the graph structure and parameters through
stochastic gradient descent. Besides the theoretical analysis of the approach
in the large-sample limit, SAM is extensively validated experimentally on
synthetic and real data.
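The shape of such a combined criterion (data fit plus sparsity plus acyclicity) can be sketched numerically. The trace-exponential acyclicity penalty below is the NOTEARS formulation, used here only as a stand-in for SAM's own constraint, and the penalty weights are invented.

```python
import numpy as np

def expm(M, terms=30):
    """Truncated matrix-exponential series (adequate for small matrices)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def acyclicity(A):
    """h(A) = tr(exp(A ∘ A)) - d, zero iff the weighted graph A is a DAG
    (NOTEARS-style penalty, standing in for SAM's own constraint)."""
    return np.trace(expm(A * A)) - A.shape[0]

def structure_loss(fit_term, A, lam_sparse=0.1, lam_dag=10.0):
    """Schematic criterion: data-fit term plus sparsity and acyclicity
    penalties (the coefficients are hypothetical)."""
    return fit_term + lam_sparse * np.abs(A).sum() + lam_dag * acyclicity(A)

dag = np.array([[0.0, 1.0], [0.0, 0.0]])   # 1 -> 2 only: acyclic
cyc = np.array([[0.0, 1.0], [1.0, 0.0]])   # 1 <-> 2: contains a cycle
print(acyclicity(dag))   # 0.0
print(acyclicity(cyc))   # positive (about 1.086)
```

Because all three terms are differentiable in the adjacency weights, the whole criterion can in principle be pushed through stochastic gradient descent, which is the end-to-end optimization the abstract describes.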