
    Supplement to: A Bayesian Approach to Constraint Based Causal Inference

    This article contains additional results and proofs related to §3.3, 'Unfaithful inference: DAGs vs. MAGs', in the UAI-2012 submission 'A Bayesian Approach to Constraint Based Causal Inference'.

    Ancestral Causal Inference

    Constraint-based causal discovery from limited data is a notoriously difficult challenge due to the many borderline independence test decisions. Several approaches that improve the reliability of the predictions by exploiting redundancy in the independence information have been proposed recently. Though promising, existing approaches can still be greatly improved in terms of accuracy and scalability. We present a novel method that reduces the combinatorial explosion of the search space by using a more coarse-grained representation of causal information, drastically reducing computation time. Additionally, we propose a method to score causal predictions based on their confidence. Crucially, our implementation also allows one to easily combine observational and interventional data and to incorporate various types of available background knowledge. We prove soundness and asymptotic consistency of our method and demonstrate that it can outperform the state of the art on synthetic data, achieving a speedup of several orders of magnitude. We illustrate its practical feasibility by applying it to a challenging protein data set. Comment: In Proceedings of Advances in Neural Information Processing Systems 29 (NIPS 2016).
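
    As a hedged illustration of the kind of input such a method consumes, the sketch below turns a partial-correlation (Fisher z) conditional independence test into a signed confidence weight, so that borderline decisions carry less weight than clear-cut ones. The weighting scheme, function names, and the use of numpy/scipy are assumptions of this sketch, not the scoring rule of the paper.

        import numpy as np
        from scipy import stats

        def partial_corr(data, i, j, cond):
            # Partial correlation of columns i and j given the columns in cond,
            # read off the precision matrix of the selected variables.
            idx = [i, j] + list(cond)
            sub = np.corrcoef(data[:, idx], rowvar=False)
            prec = np.linalg.pinv(sub)
            return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

        def independence_weight(data, i, j, cond=(), alpha=0.05):
            # Fisher z-test of X_i _||_ X_j given X_cond. Returns a signed weight:
            # positive values favour independence, negative values favour dependence,
            # and values near zero mark borderline decisions.
            n = data.shape[0]
            r = np.clip(partial_corr(data, i, j, cond), -0.999999, 0.999999)
            z = np.sqrt(n - len(cond) - 3) * 0.5 * np.log((1 + r) / (1 - r))
            p = 2 * stats.norm.sf(abs(z))
            return np.log(max(p, 1e-300)) - np.log(alpha)

        # Example (hypothetical): weight for columns 0 and 2 given column 1.
        # w = independence_weight(samples, 0, 2, cond=(1,))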

    Causal Graph Discovery For Hydrological Time Series Knowledge Discovery

    Causal inference, or causal relationship discovery, is an important task in hydrological studies for exploring the causes of abnormal hydrological phenomena such as droughts and floods, and it helps improve our ability to predict and respond to natural disasters. Unlike generic causality studies, where discovering the causal relation is sufficient, for predicting and modeling extreme hydrological situations we need not only to construct a causal graph that reveals the contributing factors, but also to provide the lead time of each cause to its effect, i.e., the time difference between the occurrence of the cause and the occurrence of its effect. Though causal inference has been a major topic in many scientific problems, the majority of the work has focused on the validity of such relationships, with no information about cause-effect lead times. Such insight is critical for hydrological modeling and prediction, where lead-time information tells us how long it takes for different factors to affect extreme situations such as floods or droughts. The most commonly used computational algorithms for causality discovery can be categorized as regression approaches or Bayesian approaches. Regression-based approaches such as Granger causality assume linear, first-order causal relationships. Bayesian approaches, such as the PC algorithm based on Pearl's definition of causality, have exponential runtime complexity, which makes them difficult to apply to hydrological systems with a large number of variables. Furthermore, no existing approach incorporates the lead-time concept in the discovery of causal relationships. In this paper, we propose a new approach, mutual information causal (MI-Causal) discovery, which embodies the advantages of existing approaches and overcomes their limitations to satisfy the hydrological need. Experimental results on both synthetic and real hydrological data show that our proposed method outperforms regression-based and Bayesian approaches.
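
    To make the lead-time notion concrete, here is a minimal, self-contained sketch (not the MI-Causal algorithm itself) that scans candidate lags and picks the one maximising the mutual information between a discretised cause series and effect series. The variable names, binning choice, and use of scikit-learn's mutual_info_score are assumptions of this illustration.

        import numpy as np
        from sklearn.metrics import mutual_info_score

        def lagged_mi(cause, effect, max_lag=30, bins=16):
            # Discretise both series into equal-width bins, then compute the mutual
            # information between cause[t] and effect[t + lag] for each candidate lag.
            # The lag with the highest MI is taken as the estimated lead time.
            c = np.digitize(cause, np.histogram_bin_edges(cause, bins))
            e = np.digitize(effect, np.histogram_bin_edges(effect, bins))
            scores = []
            for lag in range(1, max_lag + 1):
                scores.append(mutual_info_score(c[:-lag], e[lag:]))
            best = int(np.argmax(scores)) + 1
            return best, scores

        # Toy example: the effect follows the cause with a 7-step delay plus noise.
        rng = np.random.default_rng(0)
        rain = rng.gamma(2.0, 1.0, size=2000)
        flow = np.roll(rain, 7) + 0.1 * rng.normal(size=2000)
        lead, mi = lagged_mi(rain, flow)
        print("estimated lead time:", lead)   # expected: 7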

    Explaining the Behavior of Black-Box Prediction Algorithms with Causal Learning

    We propose to explain the behavior of black-box prediction methods (e.g., deep neural networks trained on image pixel data) using causal graphical models. Specifically, we explore learning the structure of a causal graph where the nodes represent prediction outcomes along with a set of macro-level "interpretable" features, while allowing for arbitrary unmeasured confounding among these variables. The resulting graph may indicate which of the interpretable features, if any, are possible causes of the prediction outcome and which may be merely associated with prediction outcomes due to confounding. The approach is motivated by a counterfactual theory of causal explanation wherein good explanations point to factors that are "difference-makers" in an interventionist sense. The resulting analysis may be useful in algorithm auditing and evaluation by identifying features which make a causal difference to the algorithm's output.
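
    A rough sketch of the workflow the abstract describes might look as follows: tabulate the macro-level interpretable features together with the black-box output, then run a constraint-based learner that tolerates latent confounders. The use of FCI from the open-source causal-learn package, its interface as called here, and the helper names are assumptions of this sketch, not the paper's prescribed procedure.

        import numpy as np
        # Assumption: the open-source causal-learn package provides FCI; the exact
        # interface used below is assumed, not prescribed by the paper.
        from causallearn.search.ConstraintBased.FCI import fci

        def explain_black_box(interpretable_features, black_box, inputs):
            # `interpretable_features(x)` maps a raw input (e.g. an image) to a vector
            # of macro-level attributes; `black_box(x)` is the opaque predictor.
            # Both helpers are hypothetical placeholders.
            X = np.array([interpretable_features(x) for x in inputs], dtype=float)
            y = np.array([black_box(x) for x in inputs], dtype=float).reshape(-1, 1)
            data = np.hstack([X, y])          # last column = prediction outcome
            # FCI returns a partial ancestral graph (PAG) that allows for unmeasured
            # confounding; edges into the last node are candidate "difference-makers"
            # for the prediction, while circle endpoints flag unresolved confounding.
            pag, edges = fci(data, independence_test_method="fisherz", alpha=0.05)
            for edge in edges:
                print(edge)
            return pag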

    Neuropathic Pain Diagnosis Simulator for Causal Discovery Algorithm Evaluation

    Discovery of causal relations from observational data is essential for many disciplines of science and for real-world applications. However, unlike other machine learning algorithms, whose development has been greatly fostered by a large number of available benchmark datasets, causal discovery algorithms are notoriously difficult to evaluate systematically because few datasets with known ground-truth causal relations are available. In this work, we address the problem of evaluating causal discovery algorithms by building a flexible simulator in a medical setting. We develop a neuropathic pain diagnosis simulator, inspired by the fact that the biological processes of neuropathic pathophysiology are well studied, with well-understood causal influences. Our simulator exploits the causal graph of the neuropathic pain pathology, and the parameters of its generator are estimated from real-life patient cases. We show that the data generated by our simulator have statistics similar to those of real-world data. As a clear advantage, the simulator can produce an unlimited number of samples without jeopardizing the privacy of real-world patients. Our simulator provides a natural tool for evaluating various types of causal discovery algorithms, including those designed to deal with practical issues in causal discovery such as unknown confounders, selection bias, and missing data. Using our simulator, we have extensively evaluated causal discovery algorithms under various settings. Comment: Accepted by NeurIPS 2019; 6 figures, 10 tables.
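
    The core simulator idea, generating patient records by ancestral sampling from a known causal graph, can be illustrated with a toy three-variable sketch. The graph, the logistic parameterisation, and every coefficient below are invented for illustration; the actual simulator uses the full neuropathic-pain pathology graph with parameters estimated from real patient cases.

        import numpy as np

        rng = np.random.default_rng(42)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def sample_patients(n):
            # Ancestral sampling in topological order over a made-up graph:
            # pathology -> symptom_a, pathology -> symptom_b, symptom_a -> symptom_b.
            pathology = rng.binomial(1, 0.3, size=n)
            symptom_a = rng.binomial(1, sigmoid(-2.0 + 3.0 * pathology))
            symptom_b = rng.binomial(1, sigmoid(-2.5 + 2.0 * pathology + 1.5 * symptom_a))
            return np.column_stack([pathology, symptom_a, symptom_b])

        data = sample_patients(10_000)
        print(data.mean(axis=0))   # marginal frequencies of the three binary variables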

    Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches

    In the past two decades, functional Magnetic Resonance Imaging (fMRI) has been used to relate neuronal network activity to cognitive processing and behaviour. Recently this approach has been augmented by algorithms that allow us to infer causal links between component populations of neuronal networks. Multiple inference procedures have been proposed to approach this research question, but so far each method has limitations when it comes to establishing whole-brain connectivity patterns. In this work, we discuss eight ways to infer causality in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality, Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and Transfer Entropy. We finish by formulating some recommendations for future directions in this area.
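
    As a concrete taste of one of the reviewed methods, the sketch below implements a basic bivariate Granger-causality F-test with ordinary least squares. The function name, lag handling, and test statistics follow a generic textbook construction assumed for illustration; this is not an fMRI-ready pipeline from any of the cited approaches.

        import numpy as np
        from scipy import stats

        def granger_f_test(x, y, lags=2):
            # F-test for "x Granger-causes y": compare a restricted model (y regressed
            # on its own lags) against a full model that also includes lags of x.
            # A small p-value means past x helps predict y beyond y's own history.
            n = len(y)
            Y = y[lags:]
            own = np.column_stack([y[lags - k - 1:n - k - 1] for k in range(lags)])
            cross = np.column_stack([x[lags - k - 1:n - k - 1] for k in range(lags)])
            ones = np.ones((n - lags, 1))
            X_r = np.hstack([ones, own])
            X_f = np.hstack([ones, own, cross])
            rss_r = np.sum((Y - X_r @ np.linalg.lstsq(X_r, Y, rcond=None)[0]) ** 2)
            rss_f = np.sum((Y - X_f @ np.linalg.lstsq(X_f, Y, rcond=None)[0]) ** 2)
            df1, df2 = lags, n - lags - X_f.shape[1]
            f = ((rss_r - rss_f) / df1) / (rss_f / df2)
            return stats.f.sf(f, df1, df2)

        # Toy example: y is driven by the first lag of x, but not the other way round.
        rng = np.random.default_rng(1)
        x = rng.normal(size=500)
        y = np.zeros(500)
        for t in range(1, 500):
            y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.1 * rng.normal()
        print(granger_f_test(x, y))   # small p-value: x Granger-causes y
        print(granger_f_test(y, x))   # large p-value: no evidence in reverse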