
    backShift: Learning causal cyclic graphs from unknown shift interventions

    We propose a simple method to learn linear causal cyclic models in the presence of latent variables. The method relies on equilibrium data of the model recorded under a specific kind of intervention ("shift interventions"). The location and strength of these interventions do not have to be known and can be estimated from the data. Our method, called backShift, only uses second moments of the data and performs simple joint matrix diagonalization, applied to differences between covariance matrices. We give a necessary and sufficient condition for identifiability of the system, which is fulfilled almost surely under some quite general assumptions if and only if there are at least three distinct experimental settings, one of which may be pure observational data. We demonstrate the performance on simulated data and on applications in flow cytometry and financial time series. The code is made available as the R package backShift.
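The core algebraic idea behind the abstract can be illustrated in a few lines. Below is a minimal numpy sketch (not the backShift package itself, and all variable names are hypothetical): for an equilibrium model x = Bx + e we have covariances Sigma_j = A S_j A^T with A = (I - B)^{-1}, so differences of covariance matrices across settings share the common diagonalizer A, and jointly diagonalizing two such differences reduces to an ordinary eigenproblem.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4

# Toy setup: a stable cyclic connectivity matrix B; at equilibrium
# x = (I - B)^{-1} e, so Sigma_j = A S_j A^T with A = (I - B)^{-1}.
B = np.zeros((p, p))
B[0, 1], B[1, 2], B[2, 3], B[3, 0] = 0.4, -0.3, 0.5, 0.2   # one cycle
A = np.linalg.inv(np.eye(p) - B)

# Shift interventions change only the diagonal noise covariances S_j.
S0 = np.diag(rng.uniform(0.5, 1.5, p))   # observational setting
S1 = np.diag(rng.uniform(1.5, 3.0, p))   # experimental setting 1
S2 = np.diag(rng.uniform(3.0, 5.0, p))   # experimental setting 2
Sigma = [A @ S @ A.T for S in (S0, S1, S2)]

# Differences of covariance matrices share the diagonalizer A:
#   Sigma_j - Sigma_0 = A (S_j - S_0) A^T.
D1 = Sigma[1] - Sigma[0]
D2 = Sigma[2] - Sigma[0]

# Joint diagonalization of two matrices is an eigenproblem:
#   D1 @ inv(D2) = A diag((s1 - s0) / (s2 - s0)) inv(A),
# so the eigenvectors recover the columns of A up to sign/scale/order.
eigvals, eigvecs = np.linalg.eig(D1 @ np.linalg.inv(D2))

# Compare recovered directions with the true columns of A.
A_cols = A / np.linalg.norm(A, axis=0)
V = np.real(eigvecs)
V_cols = V / np.linalg.norm(V, axis=0)
match = np.abs(A_cols.T @ V_cols)   # |cosine| between true/recovered columns
print(np.round(match, 2))           # one entry per row/column should be ~1.0
```

This uses exact population covariances, so recovery is exact up to numerical error; the paper's contribution lies in making this robust from finite samples and characterizing when it is identifiable.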

    A Proposal on Discovering Causal Structures in Technical Systems by Means of Interventions

    Causal discovery has become an area of high interest for researchers. It has led to great advances in medicine, in the social sciences, and in genetics. But until now, it has hardly been used to identify causal relations in technical systems. This paper presents the basic building blocks for in-depth research. It reviews established causal discovery methods and causal models. In contrast to existing surveys of this domain, we focus on causal discovery methods that use interventions. Based thereon, we propose the idea of a promising interventional discovery approach for technical systems. It takes advantage of not only direct, but also indirect causal relationships, which might improve the learning process of causal structures.

    Ancestral Causal Inference

    Constraint-based causal discovery from limited data is a notoriously difficult challenge due to the many borderline independence test decisions. Several approaches to improve the reliability of the predictions by exploiting redundancy in the independence information have been proposed recently. Though promising, existing approaches can still be greatly improved in terms of accuracy and scalability. We present a novel method that reduces the combinatorial explosion of the search space by using a more coarse-grained representation of causal information, drastically reducing computation time. Additionally, we propose a method to score causal predictions based on their confidence. Crucially, our implementation also allows one to easily combine observational and interventional data and to incorporate various types of available background knowledge. We prove soundness and asymptotic consistency of our method and demonstrate that it can outperform the state-of-the-art on synthetic data, achieving a speedup of several orders of magnitude. We illustrate its practical feasibility by applying it on a challenging protein data set. Comment: In Proceedings of Advances in Neural Information Processing Systems 29 (NIPS 2016).
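The "borderline independence test decisions" motivating this paper are easy to see empirically. Below is a minimal numpy illustration (not the ACI implementation; the cutoff of 0.3 is an arbitrary example threshold): with few samples, the empirical correlation between two genuinely independent variables fluctuates so widely that a fixed-threshold independence decision is frequently wrong, while with many samples it is essentially always right.

```python
import numpy as np

rng = np.random.default_rng(1)

def abs_corr(n, trials=2000):
    """Absolute empirical correlation of two INDEPENDENT normal
    samples of size n, repeated over many trials."""
    x = rng.standard_normal((trials, n))
    y = rng.standard_normal((trials, n))
    x = x - x.mean(axis=1, keepdims=True)
    y = y - y.mean(axis=1, keepdims=True)
    r = (x * y).sum(axis=1) / np.sqrt(
        (x ** 2).sum(axis=1) * (y ** 2).sum(axis=1))
    return np.abs(r)

small, large = abs_corr(20), abs_corr(2000)

# Fraction of trials where |r| exceeds a fixed cutoff of 0.3,
# i.e. where a naive thresholded test would wrongly declare dependence.
print((small > 0.3).mean(), (large > 0.3).mean())
```

With n = 20 a substantial fraction of trials cross the threshold despite true independence, while with n = 2000 essentially none do; methods like the one above therefore score and combine constraints by confidence instead of trusting each binary decision.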