    Identifying Nonlinear 1-Step Causal Influences in Presence of Latent Variables

    We propose an approach for learning the causal structure of stochastic dynamical systems with a 1-step functional dependency in the presence of latent variables. Our information-theoretic approach recovers the causal relations among the observed variables as long as the latent variables evolve without exogenous noise. We further propose an efficient learning method based on linear regression for the special sub-case in which the dynamics are restricted to be linear. We validate the performance of our approach via numerical simulations.
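
    For the linear sub-case, the regression-based idea can be illustrated with a short sketch: fit the one-step map x_{t+1} ≈ A x_t by least squares on the observed variables and read candidate influences off the coefficient matrix. This is an illustrative Granger-style estimate under the stated linear-dynamics assumption, not the authors' algorithm; the function name and the fixed threshold are placeholders.

```python
import numpy as np

def fit_one_step_influences(X, threshold=0.1):
    """Estimate one-step linear influences among observed variables.

    X has shape (T, n): T time points of n observed variables.
    Returns (A_hat, adjacency), where A_hat[i, j] is the estimated
    coefficient of variable j at time t in variable i at time t+1, and
    adjacency marks coefficients whose magnitude exceeds `threshold`.
    """
    past, future = X[:-1], X[1:]                  # pairs (x_t, x_{t+1})
    # Solve future ≈ past @ B by least squares for all targets at once.
    B, *_ = np.linalg.lstsq(past, future, rcond=None)
    A_hat = B.T                                   # row i: regression for variable i
    adjacency = np.abs(A_hat) > threshold         # candidate j -> i edges
    return A_hat, adjacency

# Example: data generated from 1-step linear dynamics x_{t+1} = A x_t + noise.
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.0, 0.3],
                   [0.4, 0.6, 0.0],
                   [0.0, 0.0, 0.7]])
X = np.zeros((2000, 3))
for t in range(1999):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(3)
A_hat, adjacency = fit_one_step_influences(X)
print(adjacency.astype(int))   # should recover the sparsity pattern of A_true
```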

    Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling

    Identifying a coupled dynamical system out of many plausible candidates, each of which could serve as the underlying generator of some observed measurements, is a profoundly ill-posed problem that commonly arises when modelling real-world phenomena. In this review, we detail a set of statistical procedures for inferring the structure of nonlinear coupled dynamical systems (structure learning), which has proved useful in neuroscience research. A key focus here is the comparison of competing models of (i.e., hypotheses about) network architectures and implicit coupling functions in terms of their Bayesian model evidence. These methods are collectively referred to as dynamic causal modelling (DCM). We focus on a relatively new approach that is proving remarkably useful; namely, Bayesian model reduction (BMR), which enables rapid evaluation and comparison of models that differ in their network architecture. We illustrate the usefulness of these techniques through modelling neurovascular coupling (cellular pathways linking neuronal and vascular systems), whose function is an active focus of research in neurobiology and the imaging of coupled neuronal systems.
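
    Under Gaussian (Laplace) assumptions, BMR scores a reduced model, one whose priors switch off a subset of connections, directly from the posterior of the full model, with no refitting. The sketch below implements that Gaussian reduction identity; it assumes the full prior, full posterior, and reduced prior are supplied as multivariate Gaussians, and the function name is illustrative rather than any particular toolbox's API.

```python
import numpy as np

def bmr_log_evidence_change(mu0, S0, mu, S, mu0_r, S0_r):
    """Change in log model evidence (reduced minus full) under Gaussian BMR.

    mu0, S0     : mean and covariance of the full-model prior
    mu,  S      : mean and covariance of the full-model posterior
    mu0_r, S0_r : mean and covariance of the reduced-model prior
                  (e.g. near-zero variance to switch a connection off)
    Returns (dF, mu_r, S_r): the log-evidence difference and the
    reduced-model posterior mean and covariance.
    """
    P0, P, P0_r = map(np.linalg.inv, (S0, S, S0_r))   # precisions
    P_r = P + P0_r - P0                               # reduced posterior precision
    S_r = np.linalg.inv(P_r)
    mu_r = S_r @ (P @ mu + P0_r @ mu0_r - P0 @ mu0)   # reduced posterior mean

    logdet = lambda M: np.linalg.slogdet(M)[1]
    dF = 0.5 * (logdet(P) + logdet(P0_r) - logdet(P0) - logdet(P_r)) \
       + 0.5 * (mu_r @ P_r @ mu_r + mu0 @ P0 @ mu0
                - mu @ P @ mu - mu0_r @ P0_r @ mu0_r)
    return dF, mu_r, S_r
```

    Scoring a whole family of reduced priors, each pruning a different subset of connections, with this identity is what makes the rapid comparison of network architectures described above tractable.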

    Causal learning for partially observed stochastic dynamical systems

    Learning causal structure from undersampled time series

    Abstract: Even if one can experiment on relevant factors, learning the causal structure of a dynamical system can be quite difficult if the relevant measurement processes occur at a much slower sampling rate than the "true" underlying dynamics. This problem is exacerbated if the degree of mismatch is unknown. This paper gives a formal characterization of this learning problem and then provides two sets of results. First, we prove a set of theorems characterizing how causal structures change under undersampling. Second, we develop an algorithm for inferring aspects of the causal structure at the "true" timescale from the causal structure learned from the undersampled data. Research on causal learning in dynamical contexts has largely ignored the challenges of undersampling, but this paper provides a framework and foundation for learning causal structure from this type of complex time series data.

    Keywords: causal learning, causal inference, time series, undersampling

    Causal Inference, Time Series, and Undersampling

    When faced with a difficult causal learning challenge, one often turns to experimentation or interventions as a way to reduce the complexity of the problem (e.g., by eliminating the influence of unobserved common causes). This strategy is substantially more complicated, however, when learning the causal structure of a dynamical system. In particular, the "proper" experimental strategy is often to provide some exogenous shock to the dynamical system, measure its evolution, and then apply a causal learning algorithm to the resulting time series data, as in Demiralp and Hoover.
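
    To make the effect of undersampling concrete, the sketch below derives the structure visible at sampling rate u from a hypothetical one-step adjacency matrix, assuming the commonly used characterization that a directed edge i -> j appears at rate u exactly when the one-step graph contains a directed path of length u from i to j, while an unobserved intermediate sample that reaches both i and j in the same number of steps l < u shows up as a bidirected (confounded) edge. The function name and encoding are illustrative, not the paper's implementation.

```python
import numpy as np

def undersampled_graph(A1, u):
    """Directed and bidirected structure visible at undersampling rate u.

    A1 : (n, n) boolean adjacency of the true one-step graph, with
         A1[i, j] == True meaning an edge i -> j over one time step.
    Returns (directed, bidirected):
      directed[i, j]   True iff a directed path of length exactly u
                       runs from i to j in the one-step graph.
      bidirected[i, j] True iff some variable reaches both i and j in
                       the same number l < u of steps, so an unobserved
                       intermediate sample acts as a common cause.
    """
    A = A1.astype(int)
    n = A.shape[0]

    # reach[l][i, j] == 1 iff there is a directed path of length exactly l
    # from i to j (boolean matrix powers, stored as 0/1 integers).
    reach = [np.eye(n, dtype=int)]
    for _ in range(u):
        reach.append(((reach[-1] @ A) > 0).astype(int))

    directed = reach[u].astype(bool)

    bidirected = np.zeros((n, n), dtype=bool)
    for l in range(1, u):
        R = reach[l]                      # R[c, i] == 1: c reaches i in l steps
        bidirected |= (R.T @ R) > 0       # some c reaches both i and j
    np.fill_diagonal(bidirected, False)
    return directed, bidirected

# Example: the chain 0 -> 1 -> 2 observed only at every 2nd time step.
A1 = np.array([[0, 1, 0],
               [0, 0, 1],
               [0, 0, 0]], dtype=bool)
directed, bidirected = undersampled_graph(A1, u=2)
print(directed.astype(int))     # only the length-2 path 0 -> 2 survives
print(bidirected.astype(int))   # this chain induces no hidden common causes
```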