
    ASP-based Discovery of Semi-Markovian Causal Models under Weaker Assumptions

    In recent years the possibility of relaxing the so-called Faithfulness assumption in automated causal discovery has been investigated. The investigation showed (1) that the Faithfulness assumption can be weakened in various ways that in an important sense preserve its power, and (2) that weakening Faithfulness may help to speed up methods based on Answer Set Programming. However, this line of work has so far only considered the discovery of causal models without latent variables. In this paper, we study weakenings of Faithfulness for constraint-based discovery of semi-Markovian causal models, which accommodate the possibility of latent variables, and show that both (1) and (2) remain the case in this more realistic setting.
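
    As a rough illustration of the ASP-based, constraint-based pipeline this line of work builds on, the sketch below turns the outcomes of conditional independence tests into weighted soft constraints that an ASP solver such as clingo could optimize over, penalizing candidate structures that disagree with each test. The fact format, the bit-mask encoding of conditioning sets, and the weighting scheme are illustrative assumptions, not the paper's actual encoding.

        def ci_results_to_asp_facts(ci_results, alpha=0.05):
            """Turn CI test results into weighted ASP soft constraints.

            ci_results maps (x, y, cond_set) -> p-value, where x and y are
            variable indices and cond_set is a tuple of variable indices.
            """
            facts = []
            for (x, y, cond), p in ci_results.items():
                cset = sum(1 << v for v in cond)                      # conditioning set as a bit mask
                weight = max(1, int(abs(round(100 * (p - alpha)))))   # crude confidence weight
                if p > alpha:
                    # test suggests independence: penalize structures implying dependence
                    facts.append(f":~ not indep({x},{y},{cset}). [{weight}@0,{x},{y},{cset}]")
                else:
                    # test suggests dependence: penalize structures implying independence
                    facts.append(f":~ indep({x},{y},{cset}). [{weight}@0,{x},{y},{cset}]")
            return "\n".join(facts)

        print(ci_results_to_asp_facts({(0, 1, ()): 0.64, (0, 2, (1,)): 0.01}))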

    Learning Adjustment Sets from Observational and Limited Experimental Data

    Estimating causal effects from observational data is not always possible due to confounding. Identifying a set of appropriate covariates (adjustment set) and adjusting for their influence can remove confounding bias; however, such a set is typically not identifiable from observational data alone. Experimental data do not have confounding bias, but are typically limited in sample size and can therefore yield imprecise estimates. Furthermore, experimental data often include a limited set of covariates, and therefore provide limited insight into the causal structure of the underlying system. In this work we introduce a method that combines large observational and limited experimental data to identify adjustment sets and improve the estimation of causal effects. The method identifies an adjustment set (if possible) by calculating the marginal likelihood for the experimental data given observationally-derived prior probabilities of potential adjustment sets. In this way, the method can make inferences that are not possible using only the conditional dependencies and independencies in all the observational and experimental data. We show that the method successfully identifies adjustment sets and improves causal effect estimation in simulated data, and it can sometimes make additional inferences when compared to state-of-the-art methods for combining experimental and observational data.
    Comment: 10 pages, 5 figures
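
    A minimal sketch of the general idea described above, under simplifying assumptions (linear-Gaussian experimental data, a BIC approximation to the marginal likelihood, hypothetical variable names): each candidate adjustment set defines a regression of the outcome on the treatment and that set in the experimental data, and its score combines the approximate marginal likelihood with an observationally-derived log prior. This is not the paper's exact procedure.

        import numpy as np

        def bic_log_marglik(y, X):
            # BIC approximation to the log marginal likelihood of a
            # linear-Gaussian regression of y on X (with intercept).
            n = len(y)
            X1 = np.column_stack([np.ones(n), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            resid = y - X1 @ beta
            sigma2 = max(resid @ resid / n, 1e-12)
            loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
            k = X1.shape[1] + 1                       # coefficients plus noise variance
            return loglik - 0.5 * k * np.log(n)

        def score_adjustment_sets(y, t, covariates, candidates, log_prior):
            # candidates: list of tuples of covariate names
            # log_prior:  dict mapping each candidate set to its log prior
            #             probability, e.g. derived from the observational data
            scores = {}
            for Z in candidates:
                cols = [t] + [covariates[c] for c in Z]
                scores[Z] = log_prior[Z] + bic_log_marglik(y, np.column_stack(cols))
            m = max(scores.values())
            w = {Z: np.exp(s - m) for Z, s in scores.items()}
            total = sum(w.values())
            return {Z: v / total for Z, v in w.items()}   # posterior over candidates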

    Causal Razors

    When performing causal discovery, assumptions have to be made about how the true causal mechanism corresponds to the underlying joint probability distribution. These assumptions are labeled causal razors in this work. We review numerous causal razors that have appeared in the literature and offer a comprehensive logical comparison of them. In particular, we scrutinize an unpopular causal razor, namely parameter minimality, in multinomial causal models, and its logical relations with other well-studied causal razors. Our logical result poses a dilemma in selecting a reasonable scoring criterion for score-based causal search algorithms.
    Comment: 29 pages for the main paper, 14 pages for the supplementary material

    Sensitivity analyses for causal inference


    Enabling Runtime Verification of Causal Discovery Algorithms with Automated Conditional Independence Reasoning (Extended Version)

    Causal discovery is a powerful technique for identifying causal relationships among variables in data, and it has been widely used in software engineering applications. Causal discovery extensively involves conditional independence (CI) tests, so its output quality depends heavily on the performance of those tests, which can often be unreliable in practice. Moreover, privacy concerns arise when excessive CI tests are performed. Despite the distinct nature of unreliable and excessive CI tests, this paper identifies a unified and principled approach to addressing both. Generally, CI statements, the outputs of CI tests, adhere to Pearl's axioms, a set of well-established integrity constraints on conditional independence. Hence, we can either detect erroneous CI statements if they violate Pearl's axioms or prune excessive CI statements if they are logically entailed by Pearl's axioms. Holistically, both problems boil down to reasoning about the consistency of CI statements under Pearl's axioms (referred to as the CIR problem). We propose a runtime verification tool called CICheck, designed to harden causal discovery algorithms from both the reliability and the privacy perspective. CICheck employs a sound and decidable encoding scheme that translates CIR into SMT problems. To solve the CIR problem efficiently, CICheck introduces a four-stage decision procedure with three lightweight optimizations that actively prove or refute consistency, and only resorts to costly SMT-based reasoning when necessary. Based on this decision procedure, CICheck includes two variants: ED-CICheck, which detects erroneous CI tests (to enhance reliability), and a counterpart that prunes excessive CI tests (to enhance privacy). [abridged due to length limit]
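
    One simplified way to see the CIR problem, without the SMT machinery: treat the recorded CI statements as facts, close them under Pearl's semi-graphoid axioms (symmetry, decomposition, weak union, contraction), and flag a conflict whenever a recorded dependence also becomes derivable as an independence. The sketch below is a small forward-chaining version of that idea for toy examples; it is not CICheck's actual encoding, and the statement representation and function names are assumptions.

        from itertools import combinations

        def subsets(s):
            s = list(s)
            return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

        def semigraphoid_closure(indeps):
            # Close CI statements (A, B, C), meaning "A independent of B given C",
            # under symmetry, decomposition, weak union, and contraction.
            closed = set(indeps)
            changed = True
            while changed:
                changed = False
                new = set()
                for (A, B, C) in closed:
                    new.add((B, A, C))                    # symmetry
                    for D in subsets(B):
                        if D and D != B:
                            new.add((A, B - D, C))        # decomposition
                            new.add((A, B - D, C | D))    # weak union
                for (A, D, C) in closed:                  # contraction:
                    for (A2, B, C2) in closed:            # I(A,D|C) and I(A,B|C∪D) give I(A,B∪D|C)
                        if A2 == A and C2 == C | D:
                            new.add((A, B | D, C))
                if not new <= closed:
                    closed |= new
                    changed = True
            return closed

        def find_conflicts(indeps, deps):
            # Return recorded dependences contradicted by the closure of the independences.
            closure = semigraphoid_closure(indeps)
            return [d for d in deps if d in closure or (d[1], d[0], d[2]) in closure]

        # Example: X ⟂ {Y, W} entails X ⟂ Y, so a recorded dependence between X and Y is flagged.
        X, Y, W = frozenset({"X"}), frozenset({"Y"}), frozenset({"W"})
        print(find_conflicts({(X, Y | W, frozenset())}, [(X, Y, frozenset())]))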

    The Minimal Modal Interpretation of Quantum Theory

    We introduce a realist, unextravagant interpretation of quantum theory that builds on the existing physical structure of the theory and allows experiments to have definite outcomes, but leaves the theory's basic dynamical content essentially intact. Much as classical systems have specific states that evolve along definite trajectories through configuration spaces, the traditional formulation of quantum theory asserts that closed quantum systems have specific states that evolve unitarily along definite trajectories through Hilbert spaces, and our interpretation extends this intuitive picture of states and Hilbert-space trajectories to the case of open quantum systems as well. We provide independent justification for the partial-trace operation for density matrices, reformulate wave-function collapse in terms of an underlying interpolating dynamics, derive the Born rule from deeper principles, resolve several open questions regarding ontological stability and dynamics, address a number of familiar no-go theorems, and argue that our interpretation is ultimately compatible with Lorentz invariance. Along the way, we also investigate a number of unexplored features of quantum theory, including an interesting geometrical structure---which we call subsystem space---that we believe merits further study. We include an appendix that briefly reviews the traditional Copenhagen interpretation and the measurement problem of quantum theory, as well as the instrumentalist approach and a collection of foundational theorems not otherwise discussed in the main text.
    Comment: 73 pages + references, 9 figures; cosmetic changes, added figure, updated references, generalized conditional probabilities with attendant changes to the sections on the EPR-Bohm thought experiment and Lorentz invariance; for a concise summary, see the companion letter at arXiv:1405.675
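
    For readers unfamiliar with the partial-trace operation mentioned above, the following minimal NumPy sketch (a conventional two-qubit example, not specific to this interpretation) computes the reduced density matrix of one subsystem and shows that tracing out half of a Bell pair leaves a maximally mixed state.

        import numpy as np

        def partial_trace(rho, dims, keep):
            # Reduced density matrix of subsystem `keep` (0 or 1) for a bipartite
            # state rho acting on C^{dims[0]} tensor C^{dims[1]}.
            dA, dB = dims
            rho = rho.reshape(dA, dB, dA, dB)
            if keep == 0:
                return np.trace(rho, axis1=1, axis2=3)   # trace out subsystem B
            return np.trace(rho, axis1=0, axis2=2)       # trace out subsystem A

        # Bell state (|00> + |11>)/sqrt(2); its reduced state is the identity over 2.
        phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
        rho = np.outer(phi, phi.conj())
        print(partial_trace(rho, (2, 2), keep=0))        # [[0.5, 0.], [0., 0.5]]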