124 research outputs found

    Gene Regulatory Network Reconstruction Using Dynamic Bayesian Networks

    Get PDF
    High-content technologies such as DNA microarrays can provide a system-scale overview of how genes interact with each other in a network context. Various mathematical methods and computational approaches have been proposed to reconstruct gene regulatory networks (GRNs), including Boolean networks, information theory, differential equations and Bayesian networks. GRN reconstruction faces huge intrinsic challenges on both the experimental and theoretical fronts, because the inputs and outputs of the molecular processes are unclear and the underlying principles are unknown or too complex. In this work, we focused on improving the accuracy and speed of GRN reconstruction with a Dynamic Bayesian Network (DBN)-based method. A commonly used structure-learning algorithm is based on REVEAL (REVerse Engineering ALgorithm). However, this method has limitations when used to reconstruct GRNs. For instance, the two-stage temporal Bayes network (2TBN) cannot be well recovered by applying REVEAL; the method has low accuracy and speed for high-dimensional networks with more than a hundred nodes; and it cannot even accomplish the task of reconstructing a network with 400 nodes. We implemented an algorithm for DBN structure learning with Friedman's score function to replace REVEAL, tested it on the reconstruction of both synthetic networks and real yeast networks, and compared it with REVEAL in the absence or presence of the preprocessed network generated by Zou and Conzen's algorithm. The new score metric improved the precision and recall of GRN reconstruction. Networks of gene interactions were reconstructed using this DBN approach and analyzed to identify the mechanism of chemical-induced reversible neurotoxicity in earthworms, with tools for curating relevant genes from the non-model organism's pathways to model organism pathways.
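    The score-based structure search described above can be sketched as follows. This is a minimal illustration only: BIC stands in for Friedman's score, expression values are assumed binarized, and parent sets are searched exhaustively over the previous time slice; none of these choices is claimed to match the thesis implementation.

```python
import itertools
import numpy as np

def bic_score(child, parents, data):
    """BIC of P(child at t | parents at t-1) for binary time-series data.

    data: (T+1, n) array of 0/1 expression values; child/parents are columns.
    """
    T = data.shape[0] - 1                 # number of observed transitions
    x = data[1:, child]                   # child values at time t
    pa = data[:-1, list(parents)]         # parent values at time t-1
    groups = {}
    for cfg, xv in zip(map(tuple, pa), x):
        groups.setdefault(cfg, []).append(int(xv))
    ll = 0.0
    for vals in groups.values():
        n = len(vals)
        for c in (sum(vals), n - sum(vals)):
            if c > 0:
                ll += c * np.log(c / n)   # maximum-likelihood log-likelihood
    n_params = 2 ** len(parents)          # one Bernoulli per parent config
    return ll - 0.5 * n_params * np.log(T)

def learn_parents(data, max_parents=2):
    """Exhaustively pick, for each gene, the best-BIC parent set at t-1."""
    n = data.shape[1]
    candidates = [list(c) for k in range(max_parents + 1)
                  for c in itertools.combinations(range(n), k)]
    return {child: max(candidates, key=lambda p: bic_score(child, p, data))
            for child in range(n)}
```

    Scoring each gene's family independently against the previous time slice is what makes the 2TBN search decomposable, which is where a score-based method gains speed over REVEAL-style mutual-information enumeration.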

    Fast Markov blanket discovery

    Get PDF
    In this thesis, we address the problem of learning the Markov blanket of a quantity from data in an efficient manner. Markov blanket discovery is useful both for the feature subset selection problem and for discovering the structure of a Bayesian network. Our contribution is a novel algorithm, called FAST-IAMB, for the induction of Markov blankets that employs a fast heuristic to converge quickly to the Markov blanket. Empirical results show that the algorithm performs reasonably faster than existing algorithms, and that the speedup does not adversely affect the accuracy of the recovered Markov blankets.
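    The IAMB family that FAST-IAMB belongs to follows a two-phase grow/shrink pattern, sketched below with a plug-in conditional mutual information estimate and a fixed dependence threshold standing in for a proper statistical test (both are simplifications, not the FAST-IAMB heuristic itself).

```python
from collections import Counter
import numpy as np

def cmi(data, i, j, cond):
    """Plug-in conditional mutual information I(X_i; X_j | X_cond), in nats."""
    n = data.shape[0]
    z = [tuple(r) for r in data[:, list(cond)]]
    xz = Counter(zip(data[:, i], z))
    yz = Counter(zip(data[:, j], z))
    zc = Counter(z)
    val = 0.0
    for (xi, yi, zi), c in Counter(zip(data[:, i], data[:, j], z)).items():
        val += (c / n) * np.log(c * zc[zi] / (xz[(xi, zi)] * yz[(yi, zi)]))
    return val

def iamb(data, target, threshold=0.02):
    """Grow/shrink Markov blanket search in the IAMB style."""
    n_vars = data.shape[1]
    mb = []
    while True:
        rest = [v for v in range(n_vars) if v != target and v not in mb]
        if not rest:
            break
        best = max(rest, key=lambda v: cmi(data, target, v, mb))
        if cmi(data, target, best, mb) <= threshold:
            break                     # nothing left that looks dependent
        mb.append(best)               # grow: add the strongest dependence
    for v in list(mb):                # shrink: remove false positives
        if cmi(data, target, v, [u for u in mb if u != v]) <= threshold:
            mb.remove(v)
    return sorted(mb)
```

    FAST-IAMB's speedup comes from admitting several variables per grow step instead of one, which reduces the number of conditional tests; the grow/shrink skeleton stays the same.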

    Discovering Causal Relations and Equations from Data

    Full text link
    Physics is a field of science that has traditionally used the scientific method to answer questions about why natural phenomena occur and to make testable models that explain the phenomena. Discovering equations, laws and principles that are invariant, robust and causal explanations of the world has been fundamental in physical sciences throughout the centuries. Discoveries emerge from observing the world and, when possible, performing interventional studies in the system under study. With the advent of big data and the use of data-driven methods, causal and equation discovery fields have grown and made progress in computer science, physics, statistics, philosophy, and many applied fields. All these domains are intertwined and can be used to discover causal relations, physical laws, and equations from observational data. This paper reviews the concepts, methods, and relevant works on causal and equation discovery in the broad field of Physics and outlines the most important challenges and promising future lines of research. We also provide a taxonomy for observational causal and equation discovery, point out connections, and showcase a complete set of case studies in Earth and climate sciences, fluid dynamics and mechanics, and the neurosciences. This review demonstrates that discovering fundamental laws and causal relations by observing natural phenomena is being revolutionised with the efficient exploitation of observational data, modern machine learning algorithms and the interaction with domain knowledge. Exciting times are ahead with many challenges and opportunities to improve our understanding of complex systems.
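    As a toy illustration of the equation-discovery side of this survey area, sparse regression over an assumed library of candidate terms can recover a governing equation from noisy data. The SINDy-style thresholded least squares below, the cubic polynomial library, and the threshold value are all illustrative choices, not methods endorsed by the review.

```python
import numpy as np

def discover(x, y, threshold=0.1, iters=10):
    """Sparse fit of y over the candidate library [1, x, x^2, x^3]:
    repeated least squares with hard thresholding of small coefficients."""
    theta = np.column_stack([np.ones_like(x), x, x**2, x**3])
    w = np.linalg.lstsq(theta, y, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(w) < threshold   # zero out negligible library terms
        w[small] = 0.0
        keep = ~small
        if keep.any():                  # refit using the surviving terms only
            w[keep] = np.linalg.lstsq(theta[:, keep], y, rcond=None)[0]
    return w
```

    Given samples of y = 3x² − x plus small noise, the loop zeroes the constant and cubic terms and keeps coefficients close to (−1, 3) on the linear and quadratic terms, i.e., it reads the equation back out of the data.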

    Information-Theoretic Causal Discovery

    Get PDF
    It is well known that correlation does not equal causation, but how can we infer causal relations from data? Causal discovery tries to answer precisely this question by rigorously analyzing under which assumptions it is feasible to infer causal networks from passively collected, so-called observational data. In particular, causal discovery aims to infer a directed graph among a set of observed random variables under assumptions that are as realistic as possible. A key assumption in causal discovery is faithfulness: we assume that separations in the true graph imply independencies in the distribution, and vice versa. If faithfulness holds and we have access to a perfect independence oracle, traditional causal discovery approaches can infer the Markov equivalence class of the true causal graph, i.e., the correct undirected network and even some of the edge directions. In a real-world setting, however, faithfulness may be violated, and we do not have access to such an independence oracle. Beyond that, we are interested in inferring the complete DAG structure, not just the Markov equivalence class. To circumvent, or at least alleviate, these limitations, we take an information-theoretic approach. In the first part of this thesis, we consider violations of faithfulness that can be induced by exclusive-or relations or cancelling paths, and develop a weaker faithfulness assumption, called 2-adjacency faithfulness, to detect some of these mechanisms. Further, we analyze under which conditions it is possible to infer the correct DAG structure even if such violations occur. In the second part, we focus on independence testing via conditional mutual information (CMI). CMI is an information-theoretic measure of dependence based on Shannon entropy. We first suggest estimating CMI for discrete variables via normalized maximum likelihood instead of the plug-in maximum likelihood estimator, which tends to overestimate dependencies.
On top of that, we show that CMI can be consistently estimated for discrete-continuous mixture random variables by simply discretizing the continuous part of each variable. Last, we consider the problem of distinguishing the two Markov equivalent graphs X → Y and Y → X, which is a necessary step towards discovering all edge directions. To solve this problem, it is inevitable to make assumptions about the generating mechanism. We build on the postulate that the cause is algorithmically independent of its mechanism. We propose two methods to approximate this postulate via the Minimum Description Length (MDL) principle: one for univariate numeric data and one for multivariate mixed-type data. Finally, we combine insights from our MDL-based approach with regression-based methods that have strong guarantees, and show that we can identify cause and effect via L0-regularized regression.
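    The cause-effect step can be caricatured with a two-part code-length comparison: model each direction as function-plus-independent-noise and prefer the direction that compresses the data better. The Gaussian code lengths and polynomial model class below are illustrative stand-ins, not the MDL encodings actually developed in the thesis.

```python
import numpy as np

def gauss_bits(r):
    """Gaussian code-length proxy (bits/sample, up to an additive constant)."""
    return 0.5 * np.log2(np.var(r) + 1e-12)

def direction_score(x, y, degree=3):
    """Two-part description-length proxy for the model x -> y:
    cost of the input marginal plus cost of the regression residuals."""
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    return gauss_bits(x) + gauss_bits(resid)

def infer_direction(x, y):
    """Prefer the direction whose model compresses the data better."""
    return "X->Y" if direction_score(x, y) < direction_score(y, x) else "Y->X"
```

    On data generated as y = x³ + x plus independent noise, the forward model leaves small independent residuals while the backward model cannot, so the forward code is shorter; this is the algorithmic-independence postulate approximated by crude Gaussian codes.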

    Quantifying and mitigating privacy risks in biomedical data

    Get PDF
    The decreasing costs of molecular profiling have fueled the biomedical research community with a plethora of new types of biomedical data, allowing for a breakthrough towards a more precise and personalized medicine. However, the release of these intrinsically highly sensitive, interdependent data poses a severe new privacy threat. So far, the security community has mostly focused on privacy risks arising from genomic data. However, the manifold privacy risks stemming from other types of biomedical data – and epigenetic data in particular – have been largely overlooked. In this thesis, we provide means to quantify and protect the privacy of individuals' biomedical data. Besides the genome, we specifically focus on two of the most important epigenetic elements influencing human health: microRNAs and DNA methylation. We quantify the privacy risks for multiple realistic attack scenarios, namely (1) linkability attacks along the temporal dimension, between different types of data, and between related individuals, (2) membership attacks, and (3) inference attacks. Our results underline that the privacy risks inherent to biomedical data have to be taken seriously. Moreover, we present and evaluate solutions to preserve the privacy of individuals. Our mitigation techniques stretch from the differentially private release of epigenetic data, considering its utility, up to cryptographic constructions to securely and privately evaluate a random forest on a patient's data.
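    As a flavor of the differentially private release techniques mentioned, the Laplace mechanism suffices for a bounded aggregate such as a cohort's mean methylation level. The bounds, privacy budget, and the methylation framing below are illustrative assumptions, not the thesis's calibrated release scheme.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng):
    """Release the mean of bounded values with epsilon-differential privacy.

    Changing one record moves the mean by at most (upper - lower) / n,
    so Laplace noise with scale sensitivity / epsilon suffices.
    """
    values = np.clip(values, lower, upper)   # enforce the assumed bounds
    sensitivity = (upper - lower) / len(values)
    return float(values.mean() + rng.laplace(0.0, sensitivity / epsilon))
```

    The utility consideration from the abstract shows up directly here: for a fixed epsilon, the noise scale shrinks with cohort size, so larger studies can publish more accurate statistics at the same privacy level.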

    Distributed Learning, Prediction and Detection in Probabilistic Graphs.

    Full text link
    Critical to high-dimensional statistical estimation is to exploit the structure in the data distribution. Probabilistic graphical models provide an efficient framework for representing complex joint distributions of random variables through their conditional dependency graph, and can be adapted to many high-dimensional machine learning applications. This dissertation develops the probabilistic graphical modeling technique for three statistical estimation problems arising in real-world applications: distributed and parallel learning in networks, missing-value prediction in recommender systems, and emerging topic detection in text corpora. The common theme behind all proposed methods is a combination of parsimonious representation of uncertainties in the data, optimization surrogate that leads to computationally efficient algorithms, and fundamental limits of estimation performance in high dimension. More specifically, the dissertation makes the following theoretical contributions: (1) We propose a distributed and parallel framework for learning the parameters in Gaussian graphical models that is free of iterative global message passing. The proposed distributed estimator is shown to be asymptotically consistent, improve with increasing local neighborhood sizes, and have a high-dimensional error rate comparable to that of the centralized maximum likelihood estimator. (2) We present a family of latent variable Gaussian graphical models whose marginal precision matrix has a “low-rank plus sparse” structure. Under mild conditions, we analyze the high-dimensional parameter error bounds for learning this family of models using regularized maximum likelihood estimation. (3) We consider a hypothesis testing framework for detecting emerging topics in topic models, and propose a novel surrogate test statistic for the standard likelihood ratio. 
By leveraging the theory of empirical processes, we prove asymptotic consistency for the proposed test and provide guarantees on the detection performance.
    PhD dissertation, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/110499/1/mengzs_1.pd
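    The message-passing-free local estimation idea from contribution (1) can be illustrated with neighborhood regression: each node recovers its own row of the precision matrix from a local regression alone, with no global iteration. This is a stand-in sketch of the principle, not the dissertation's full distributed estimator.

```python
import numpy as np

def local_precision_row(X, i):
    """Estimate row i of the precision matrix from samples X (n x p):
    regress X_i on the remaining variables and rescale the coefficients."""
    others = [j for j in range(X.shape[1]) if j != i]
    beta, *_ = np.linalg.lstsq(X[:, others], X[:, i], rcond=None)
    resid = X[:, i] - X[:, others] @ beta
    theta_ii = 1.0 / resid.var()       # Theta_ii = 1 / Var(X_i | rest)
    row = np.zeros(X.shape[1])
    row[i] = theta_ii
    row[others] = -beta * theta_ii     # Theta_ij = -beta_j * Theta_ii
    return row
```

    Because the conditional of a Gaussian given the rest is itself Gaussian with coefficients −Θ_ij/Θ_ii and variance 1/Θ_ii, every node can fill in its precision row from data it sees locally, which is what makes a distributed, iteration-free scheme possible.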