
    Graphical continuous Lyapunov models

    The linear Lyapunov equation of a covariance matrix parametrizes the equilibrium covariance matrix of a stochastic process. This parametrization can be interpreted as a new graphical model class, and we show how the model class behaves under marginalization and introduce a method for structure learning via ℓ1-penalized loss minimization. Our proposed method is demonstrated to outperform alternative structure learning algorithms in a simulation study, and we illustrate its application for protein phosphorylation network reconstruction. Comment: 10 pages, 5 figures
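    The parametrization described in this abstract rests on the continuous Lyapunov equation M Σ + Σ Mᵀ + C = 0, which links a stable drift matrix M and a volatility matrix C to the equilibrium covariance Σ. A minimal sketch using scipy (the particular M and C below are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Drift matrix M (stable: all eigenvalues have negative real part) and
# volatility matrix C -- both chosen purely for illustration.
M = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
C = np.eye(2)

# The equilibrium covariance Sigma solves M @ Sigma + Sigma @ M.T + C = 0.
# scipy's solver handles A X + X A^T = Q, so we pass Q = -C.
Sigma = solve_continuous_lyapunov(M, -C)

residual = M @ Sigma + Sigma @ M.T + C
print(np.allclose(residual, 0))  # the Lyapunov equation is satisfied
```

Structure learning in this model class then amounts to recovering the sparsity pattern of M from an estimate of Σ, which is what the ℓ1 penalty in the abstract promotes.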

    D'ya like DAGs? A Survey on Structure Learning and Causal Discovery

    Causal reasoning is a crucial part of science and human intelligence. In order to discover causal relationships from data, we need structure discovery methods. We provide a review of background theory and a survey of methods for structure discovery. We primarily focus on modern, continuous optimization methods, and provide reference to further resources such as benchmark datasets and software packages. Finally, we discuss the assumptive leap required to take us from structure to causality. Comment: 35 pages
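    A recurring ingredient of the modern continuous-optimization methods this survey focuses on is a smooth acyclicity penalty in place of the combinatorial DAG constraint; NOTEARS-style formulations use h(W) = tr(exp(W ∘ W)) − d, which vanishes exactly when the weighted adjacency matrix W encodes an acyclic graph. A small sketch (the example matrices are assumptions for illustration):

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """NOTEARS-style penalty: tr(exp(W*W)) - d, zero exactly for DAGs."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d  # elementwise square, matrix exp

dag = np.array([[0.0, 1.5, 0.0],
                [0.0, 0.0, 2.0],
                [0.0, 0.0, 0.0]])   # strictly upper triangular: acyclic
cyclic = np.array([[0.0, 1.0],
                   [1.0, 0.0]])     # a two-node feedback cycle

print(acyclicity(dag))     # ~0 for the DAG
print(acyclicity(cyclic))  # strictly positive for the cyclic graph
```

Because h is differentiable, structure learning can be posed as a continuous program (score plus penalty) rather than a search over graph space.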

    Learning causal relationships in the presence of latent variables

    The causal relationships determining the behaviour of a system under study are inherently directional: by manipulating a cause we can control its effect, but an effect cannot be used to control its cause. Understanding the network of causal relationships is necessary, for example, if we want to predict the behaviour in settings where the system is subject to different manipulations. However, we are rarely able to directly observe the causal processes in action; we only see the statistical associations they induce in the collected data. This thesis considers the discovery of the fundamental causal relationships from data in several different learning settings and under various modeling assumptions. Although the research is mostly theoretical, possible application areas include biology, medicine, economics and the social sciences. Latent confounders, unobserved common causes of two or more observed parts of a system, are especially troublesome when discovering causal relations. The statistical dependence relations induced by such latent confounders often cannot be distinguished from directed causal relationships. The possible presence of feedback, which induces a cyclic causal structure, is another complicating factor. To achieve informative learning results in this challenging setting, some restricting assumptions need to be made. One option is to constrain the functional forms of the causal relationships to be smooth and simple. In particular, we explore how linearity of the causal relations can be effectively exploited. Another common assumption under study is causal faithfulness, with which we can deduce the lack of causal relations from the lack of statistical associations. Along with these assumptions, we use data from randomized experiments, in which the system under study is observed under different interventions and manipulations.
In particular, we present a full theoretical foundation for learning linear cyclic models with latent variables from second-order statistics in several experimental data sets. This includes necessary and sufficient conditions on the different experimental settings needed for full model identification, a provably complete learning algorithm, and a characterization of the underdetermination when the data do not allow for full model identification. We also consider several ways of exploiting the faithfulness assumption for this model class. We are able to learn from overlapping data sets, in which different (but overlapping) subsets of variables are observed. In addition, we formulate a model class called Noisy-OR models with latent confounding. We prove sufficient and worst-case necessary conditions for the identifiability of the full model and derive several learning algorithms. The thesis also suggests optimal sets of experiments for the identification of the above models and others. For settings without latent confounders, we develop a Bayesian learning algorithm that is able to exploit non-Gaussianity in passively observed data.
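    The second-order statistics used throughout the thesis come from the standard linear model x = Bx + e: at equilibrium x = (I − B)⁻¹e, so the observed covariance is (I − B)⁻¹ Σe (I − B)⁻ᵀ even when B contains cycles. A minimal sketch with an illustrative cyclic B (the coefficients and noise scales are assumptions):

```python
import numpy as np

# Direct-effects matrix B with a feedback loop x0 -> x1 -> x0
# (coefficients chosen for illustration; spectral radius < 1 so the
# cyclic model has a well-defined equilibrium).
B = np.array([[0.0, 0.4],
              [0.5, 0.0]])
Sigma_e = np.diag([1.0, 2.0])       # covariance of the exogenous noise

I = np.eye(2)
A = np.linalg.inv(I - B)            # total effects (I - B)^-1
Sigma_x = A @ Sigma_e @ A.T         # implied second-order statistics

# Sanity check against a large simulated equilibrium sample:
rng = np.random.default_rng(0)
e = rng.multivariate_normal(np.zeros(2), Sigma_e, size=200_000)
x = e @ A.T                         # x = (I - B)^-1 e, sample by sample
print(np.allclose(np.cov(x.T), Sigma_x, atol=0.05))
```

Identification results of the kind proved in the thesis ask when B and Σe can be recovered from such covariances observed under different experimental interventions.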

    Causal discovery beyond Markov equivalence

    The focus of the dissertation is on learning causal diagrams beyond Markov equivalence. The baseline assumptions in causal structure learning are the acyclicity of the underlying structure and causal sufficiency, which requires that there are no unobserved confounders in the system. Under these assumptions, conditional independence relationships contain all the information in the distribution that can be used for structure learning. Therefore, the causal diagram can be identified only up to Markov equivalence, the set of structures reflecting the same conditional independence relationships, and for many ground-truth structures the direction of a large portion of the edges remains unidentified. Learning the structure beyond Markov equivalence therefore requires generating, or having access to, extra joint distributions from the perturbed causal system. There are two main scenarios for acquiring these extra joint distributions. The first and main scenario is when an experimenter directly performs a sequence of interventions on subsets of the variables of the system to generate interventional distributions. We refer to the task of causal discovery from such interventional data as interventional causal structure learning. In this setting, the key question is determining which variables should be intervened on to gain the most information. This is the first focus of this dissertation. The second scenario for acquiring the extra joint distributions is when a subset of causal mechanisms, and consequently the joint distribution of the system, have varied or evolved for reasons beyond the control of the experimenter. In this case, it is not even known a priori to the experimenter which causal mechanisms have varied. We refer to the task of causal discovery from such multi-domain data as multi-domain causal structure learning.
In this setup, the main question is how one can take the most advantage of the changes across domains for the task of causal discovery. This is the second focus of this dissertation. Next, we consider cases in which conditional independence may not reflect all the information in the distribution that can be used to identify the underlying structure. One such case is when cycles are allowed in the underlying structure. Unfortunately, a suitable characterization of equivalence for cyclic directed graphs has so far been unknown. The third focus of this dissertation is on bridging the gap between cyclic and acyclic directed graphs by introducing a general approach for equivalence characterization and structure learning. Another case in which conditional independence may not reflect all the information in the distribution is when there are extra assumptions on the generating causal modules. A seminal result in this direction is that a linear model with non-Gaussian exogenous variables is uniquely identifiable. As the fourth focus of this dissertation, we consider this setup, yet go one step further and allow for violation of causal sufficiency, and investigate how this generalization affects identifiability.
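    The non-Gaussian identifiability result mentioned above (the LiNGAM setting) can be illustrated with a toy check: regress in both directions and ask which residual is independent of the regressor. Here a crude nonlinear correlation stands in for a proper independence test, and the data-generating model is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Ground truth: x -> y with uniform (non-Gaussian) noise.
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.uniform(-1, 1, n)

def dependence(a, b):
    """Crude nonlinear dependence score: |corr(a^3, OLS residual)|.
    OLS forces corr(a, residual) = 0 in both directions, so a nonlinear
    feature of the regressor is needed to tell the directions apart."""
    slope = np.cov(a, b)[0, 1] / np.var(a)
    resid = b - slope * a
    return abs(np.corrcoef(a ** 3, resid)[0, 1])

forward = dependence(x, y)   # residual ~ independent of x: score near 0
backward = dependence(y, x)  # residual depends on y: clearly larger
print(forward < backward)
```

With Gaussian noise both directions would score near zero, which is exactly why non-Gaussianity buys identifiability beyond Markov equivalence.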

    Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches

    In the past two decades, functional Magnetic Resonance Imaging has been used to relate neuronal network activity to cognitive processing and behaviour. Recently, this approach has been augmented by algorithms that allow us to infer causal links between component populations of neuronal networks. Multiple inference procedures have been proposed to approach this research question, but so far each method has limitations when it comes to establishing whole-brain connectivity patterns. In this work, we discuss eight ways to infer causality in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality, Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and Transfer Entropy. We conclude by formulating recommendations for future directions in this area.
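    Of the eight approaches listed, Granger causality is perhaps the easiest to sketch: x Granger-causes y if past values of x improve prediction of y beyond what y's own past provides. A minimal lag-1 bivariate illustration on synthetic series (the data-generating coefficients are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000

# Synthetic time series: y[t] depends on x[t-1], but not vice versa.
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

def rss(target, predictors):
    """Residual sum of squares of an ordinary least squares fit."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.sum((target - predictors @ beta) ** 2)

yt, y1, x1 = y[1:], y[:-1], x[:-1]
restricted = rss(yt, np.column_stack([np.ones(n - 1), y1]))       # y's past only
full = rss(yt, np.column_stack([np.ones(n - 1), y1, x1]))         # plus x's past

# Adding x's past sharply reduces the prediction error for y:
print(full < 0.5 * restricted)
```

In practice an F-test on this RSS reduction gives the Granger statistic; fMRI applications additionally contend with hemodynamic lags, which is one of the limitations the review discusses.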

    Bayesian stochastic blockmodels for community detection in networks and community-structured covariance selection

    Networks have been widely used to describe interactions among objects in diverse fields. Given the interest in explaining a network by its structure, much attention has been drawn to finding clusters of nodes with dense connections within clusters but sparse connections between clusters. Such clusters are called communities, and identifying them is known as community detection. Here, to perform community detection, I focus on stochastic blockmodels (SBM), a class of statistically based generative models. I present a flexible SBM that represents different types of data as well as node attributes under a Bayesian framework. The proposed models explicitly capture community behavior by guaranteeing that connections are denser within communities than between communities. First, I present a degree-corrected SBM based on a logistic regression formulation to model binary networks. To fit the model, I obtain posterior samples via Gibbs sampling based on Polya-Gamma latent variables. I conduct inference based on a novel, canonically mapped centroid estimator that formally addresses label non-identifiability and captures representative community assignments. Next, to accommodate large-scale datasets, I extend the degree-corrected SBM to a broader family of generalized linear models with group correction terms. To conduct exact inference efficiently, I develop an iteratively reweighted least squares procedure that implicitly updates sufficient statistics on the network to obtain maximum a posteriori (MAP) estimators. I demonstrate the proposed model and estimation on simulated benchmark networks and various real-world datasets. Finally, I develop a Bayesian SBM for community-structured covariance selection. Here, I assume that the data at each node are Gaussian, and I posit a latent network in which two nodes are not connected if their observations are conditionally independent given the observations of the other nodes.
In the context of biological and social applications, I expect this latent network to show a block dependency structure that represents community behavior. Thus, to identify the latent network and detect communities, I propose a hierarchical prior with two levels: a spike-and-slab prior on the off-diagonal entries of the concentration matrix for variable selection, and a degree-corrected SBM to capture community behavior. I develop an efficient routine based on ridge regularization and MAP estimation to conduct inference.
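    The defining property the models above enforce, denser connections within communities than between them, is easy to see in a vanilla two-block SBM (the block sizes and edge probabilities here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [60, 60]                  # two equal-sized communities
labels = np.repeat([0, 1], sizes)
P = np.array([[0.30, 0.05],       # within-community edge probability 0.30,
              [0.05, 0.30]])      # between-community edge probability 0.05

# Sample a symmetric binary adjacency matrix from the blockmodel.
n = len(labels)
probs = P[labels][:, labels]                  # per-pair edge probabilities
upper = np.triu(rng.random((n, n)) < probs, k=1)
A = upper | upper.T

same = labels[:, None] == labels[None, :]
off_diag = ~np.eye(n, dtype=bool)
within = A[same & off_diag].mean()
between = A[~same].mean()
print(within > between)           # the community structure is visible
```

Inference in the dissertation runs this generative story in reverse: given A (or, in the covariance-selection model, a latent A), recover the labels and block probabilities.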