Ancestral Causal Inference
Constraint-based causal discovery from limited data is a notoriously
difficult challenge due to the many borderline independence test decisions.
Several approaches to improve the reliability of the predictions by exploiting
redundancy in the independence information have been proposed recently. Though
promising, existing approaches can still be greatly improved in terms of
accuracy and scalability. We present a novel method that reduces the
combinatorial explosion of the search space by using a more coarse-grained
representation of causal information, drastically reducing computation time.
Additionally, we propose a method to score causal predictions based on their
confidence. Crucially, our implementation also allows one to easily combine
observational and interventional data and to incorporate various types of
available background knowledge. We prove soundness and asymptotic consistency
of our method and demonstrate that it can outperform the state-of-the-art on
synthetic data, achieving a speedup of several orders of magnitude. We
illustrate its practical feasibility by applying it to a challenging protein data set.
Comment: In Proceedings of Advances in Neural Information Processing Systems 29 (NIPS 2016).
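The core difficulty this abstract points to, borderline independence-test decisions on limited data, can be made concrete with a small sketch. The snippet below is not the authors' ACI implementation; it only illustrates, for roughly Gaussian data and a hypothetical partial-correlation test, how a hard independence decision can be accompanied by a confidence weight so that borderline calls carry less influence.

```python
# Illustration only (not the authors' method): a partial-correlation
# independence test whose p-value is turned into a confidence weight,
# so borderline decisions (p close to the threshold) carry low weight.
import numpy as np
from scipy import stats

def weighted_independence_test(data, i, j, cond=(), alpha=0.05):
    """Return (independent, weight) for X_i _||_ X_j given X_cond."""
    cols = [i, j] + list(cond)
    prec = np.linalg.pinv(np.corrcoef(data[:, cols], rowvar=False))
    pcorr = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    n, k = data.shape[0], len(cond)
    z = np.arctanh(pcorr) * np.sqrt(n - k - 3)      # Fisher z-transform
    pval = 2 * stats.norm.sf(abs(z))
    weight = abs(np.log(max(pval, 1e-16)) - np.log(alpha))
    return pval > alpha, weight

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.8 * x + rng.normal(size=500)
z = rng.normal(size=500)
data = np.column_stack([x, y, z])
print(weighted_independence_test(data, 0, 1))   # dependent, large weight
print(weighted_independence_test(data, 0, 2))   # independent, smaller weight
```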
A Survey on Causal Discovery Methods for Temporal and Non-Temporal Data
Causal Discovery (CD) is the process of identifying the cause-effect
relationships among the variables from data. Over the years, several methods
have been developed primarily based on the statistical properties of data to
uncover the underlying causal mechanism. In this study, we introduce the common
terminology used in causal discovery and provide a comprehensive discussion of
the approaches designed to identify the causal edges in different settings. We
further discuss some of the benchmark datasets available for evaluating the
performance of the causal discovery algorithms, available tools to perform
causal discovery readily, and the common metrics used to evaluate these
methods. Finally, we conclude by presenting the common challenges involved in
CD and discussing the applications of CD in multiple areas of interest.
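As one concrete example of the evaluation metrics such surveys cover, the sketch below computes the Structural Hamming Distance between two graphs, assuming they are given as binary adjacency matrices. This is an illustrative implementation, not code from the survey.

```python
# Illustration only: the Structural Hamming Distance (SHD), a metric
# commonly used to compare an estimated graph against the ground truth,
# assuming both graphs are DAGs given as binary adjacency matrices
# with A[i, j] = 1 for an edge i -> j.
import numpy as np

def shd(a_true, a_est):
    """Number of edge additions, deletions, and reversals separating the graphs."""
    diff = np.abs(a_true - a_est)
    reversals = diff * diff.T   # flags both (i, j) and (j, i) of a reversed edge
    return int(diff.sum() - reversals.sum() // 2)

a_true = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [0, 0, 0]])
a_est = np.array([[0, 0, 0],
                  [1, 0, 1],
                  [0, 0, 0]])
print(shd(a_true, a_est))  # 1: the edge 0 -> 1 is reversed
```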
D'ya like DAGs? A Survey on Structure Learning and Causal Discovery
Causal reasoning is a crucial part of science and human intelligence. In
order to discover causal relationships from data, we need structure discovery
methods. We provide a review of background theory and a survey of methods for
structure discovery. We primarily focus on modern, continuous optimization
methods, and provide reference to further resources such as benchmark datasets
and software packages. Finally, we discuss the assumptive leap required to take
us from structure to causality.
Comment: 35 pages.
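A recurring ingredient in the continuous-optimization methods this survey focuses on is a differentiable characterization of acyclicity. The sketch below shows the trace-of-matrix-exponential form popularized by NOTEARS, purely as an illustration of that idea rather than as code from the survey itself.

```python
# Illustration only: the differentiable acyclicity penalty used by many
# continuous-optimization structure learners (the NOTEARS characterization):
# h(W) = tr(exp(W * W)) - d equals zero exactly when the weighted
# adjacency matrix W describes a DAG.
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    d = W.shape[0]
    return np.trace(expm(W * W)) - d   # elementwise square keeps entries >= 0

W_dag = np.array([[0.0, 1.2, 0.0],
                  [0.0, 0.0, -0.7],
                  [0.0, 0.0, 0.0]])
W_cyc = W_dag.copy()
W_cyc[2, 0] = 0.5                      # adds the cycle 0 -> 1 -> 2 -> 0

print(acyclicity(W_dag))  # ~0.0 for a DAG
print(acyclicity(W_cyc))  # > 0, penalizing the cycle
```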
Joint Causal Inference from Multiple Contexts
The gold standard for discovering causal relations is experimentation. Over
the last decades, alternative methods have been proposed
that can infer causal relations between variables from certain statistical
patterns in purely observational data. We introduce Joint Causal Inference
(JCI), a novel approach to causal discovery from multiple data sets from
different contexts that elegantly unifies both approaches. JCI is a causal
modeling framework rather than a specific algorithm, and it can be implemented
using any causal discovery algorithm that can take into account certain
background knowledge. JCI can deal with different types of interventions (e.g.,
perfect, imperfect, stochastic, etc.) in a unified fashion, and does not
require knowledge of intervention targets or types in case of interventional
data. We explain how several well-known causal discovery algorithms can be seen
as addressing special cases of the JCI framework, and we also propose novel
implementations that extend existing causal discovery methods for purely
observational data to the JCI setting. We evaluate different JCI
implementations on synthetic data and on flow cytometry protein expression data
and conclude that JCI implementations can considerably outperform
state-of-the-art causal discovery algorithms.
Comment: Final version, as published by JMLR.
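A minimal sketch of the pooling idea behind JCI is given below: observations from several contexts (e.g. observational data and different intervention regimes) are stacked, and context indicator variables are appended so that a standard causal discovery algorithm can be run on the joint data. The downstream algorithm and the JCI background-knowledge constraints (e.g. exogeneity of the context variables) are not shown; function and column names are illustrative assumptions, not the authors' API.

```python
# Illustration only: building a pooled data set with context variables,
# as a preprocessing step for a JCI-style analysis.
import numpy as np
import pandas as pd

def pool_contexts(datasets):
    """datasets: dict mapping a context label to an (n_k, p) array of the
    same p system variables. Returns one DataFrame with one-hot context
    indicator columns appended."""
    frames = []
    for label, data in datasets.items():
        df = pd.DataFrame(data, columns=[f"X{i}" for i in range(data.shape[1])])
        df["context"] = label
        frames.append(df)
    pooled = pd.concat(frames, ignore_index=True)
    # One-hot encode the context label into indicator (context) variables.
    return pd.get_dummies(pooled, columns=["context"], drop_first=True)

rng = np.random.default_rng(0)
obs = rng.normal(size=(200, 3))
do_x0 = rng.normal(size=(100, 3))
do_x0[:, 0] = 1.0                      # crude stand-in for an intervention on X0
pooled = pool_contexts({"obs": obs, "do_x0": do_x0})
print(pooled.columns.tolist())         # system variables plus context indicator(s)
```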
Combinatorial and algebraic perspectives on the marginal independence structure of Bayesian networks
We consider the problem of estimating the marginal independence structure of
a Bayesian network from observational data in the form of an undirected graph
called the unconditional dependence graph. We show that unconditional
dependence graphs of Bayesian networks correspond to the graphs having equal
independence and intersection numbers. Using this observation, a Gröbner
basis for a toric ideal associated to unconditional dependence graphs of
Bayesian networks is given and then extended by additional binomial relations
to connect the space of all such graphs. An MCMC method, called GrUES
(Gr\"obner-based Unconditional Equivalence Search), is implemented based on the
resulting moves and applied to synthetic Gaussian data. GrUES recovers the true
marginal independence structure via a penalized maximum likelihood or MAP
estimate at a higher rate than simple independence tests while also yielding an
estimate of the posterior, for which the HPD credible sets include the
true structure at a high rate for data-generating graphs with density at least
.
Comment: 60 pages, 13 figures, 3 tables.
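The "simple independence tests" baseline mentioned above can be sketched as pairwise marginal independence tests on Gaussian data. The snippet below is such an illustrative baseline for estimating the unconditional dependence graph; it is not the GrUES MCMC method itself.

```python
# Illustration only: estimate the unconditional dependence graph of
# Gaussian data by testing every pair of variables for marginal
# independence (the simple-independence-tests baseline, not GrUES).
import numpy as np
from scipy import stats

def unconditional_dependence_graph(data, alpha=0.05):
    """Boolean adjacency matrix with an undirected edge i -- j whenever
    the marginal correlation between X_i and X_j is significant."""
    p = data.shape[1]
    adj = np.zeros((p, p), dtype=bool)
    for i in range(p):
        for j in range(i + 1, p):
            _, pval = stats.pearsonr(data[:, i], data[:, j])
            adj[i, j] = adj[j, i] = pval < alpha
    return adj

rng = np.random.default_rng(0)
x0 = rng.normal(size=300)
x1 = 0.9 * x0 + rng.normal(size=300)   # x0 -> x1, hence marginally dependent
x2 = rng.normal(size=300)              # independent of the rest
print(unconditional_dependence_graph(np.column_stack([x0, x1, x2])))
```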