
    Geometric deep learning: going beyond Euclidean data

    Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems in computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into the networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to give an overview of different examples of geometric deep learning problems and to present available solutions, key difficulties, applications, and future research directions in this nascent field.
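
    To make the non-Euclidean setting concrete, here is a minimal sketch of one graph-convolution layer of the kind this survey covers, written in plain NumPy; the toy graph, feature sizes, and random weights are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (not the survey's method): one graph-convolution layer on a toy
# graph.  Graph, feature sizes, and weight initialisation are assumptions.
import numpy as np

def gcn_layer(A, H, W):
    """Propagate node features H over the symmetrically normalised adjacency of A."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # normalised adjacency
    return np.maximum(A_norm @ H @ W, 0.0)    # linear map + ReLU

# Toy example: 4-node path graph, 3 input features per node, 2 output features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
print(gcn_layer(A, H, W))
```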

    Information capacity of genetic regulatory elements

    Changes in a cell's external or internal conditions are usually reflected in the concentrations of the relevant transcription factors. These proteins in turn modulate the expression levels of the genes under their control and sometimes need to perform non-trivial computations that integrate several inputs and affect multiple genes. At the same time, the activities of the regulated genes would fluctuate even if the inputs were held fixed, as a consequence of the intrinsic noise in the system, and such noise must fundamentally limit the reliability of any genetic computation. Here we use information theory to formalize the notion of information transmission in simple genetic regulatory elements in the presence of physically realistic noise sources. The dependence of this "channel capacity" on noise parameters, cooperativity, and the cost of making signaling molecules is explored systematically. We find that, at least in principle, capacities higher than one bit should be achievable and that, consequently, genetic regulation is not limited to the use of binary, or "on-off", components.
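
    As a companion to the abstract's notion of channel capacity, the sketch below estimates the capacity of a toy regulatory element with the standard Blahut–Arimoto iteration; the Gaussian, input-dependent noise model and the discretization grids are illustrative assumptions, not the paper's noise model.

```python
# Blahut-Arimoto capacity of a discretised input (TF concentration) -> noisy
# output (expression level) channel.  The noise model and grids are assumptions.
import numpy as np

def blahut_arimoto(P, tol=1e-9, max_iter=10_000):
    """Capacity (bits) of a discrete channel P[x, y] = P(y | x); rows are inputs."""
    n_x = P.shape[0]
    p = np.full(n_x, 1.0 / n_x)                       # input distribution, start uniform
    for _ in range(max_iter):
        q = p @ P                                     # output marginal q[y]
        ratio = np.where(P > 0, P / q, 1.0)           # avoid log(0) where P vanishes
        c = np.exp(np.sum(P * np.log(ratio), axis=1)) # exp of per-input KL divergence
        p_new = p * c / np.sum(p * c)
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    q = p @ P
    ratio = np.where(P > 0, P / q, 1.0)
    return np.sum(p[:, None] * P * np.log2(ratio)), p # I(X;Y) in bits at the final p

# Toy channel: 20 input levels, Gaussian output noise whose width grows with the mean.
x = np.linspace(0.0, 1.0, 20)
y = np.linspace(-0.5, 1.5, 80)
sigma = 0.05 + 0.15 * x[:, None]                      # input-dependent noise (assumption)
P = np.exp(-(y[None, :] - x[:, None]) ** 2 / (2 * sigma ** 2))
P /= P.sum(axis=1, keepdims=True)
capacity, p_opt = blahut_arimoto(P)
print(f"capacity ~ {capacity:.2f} bits")
```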

    Controllability Metrics on Networks with Linear Decision Process-type Interactions and Multiplicative Noise

    This paper aims at the study of controllability properties and induced controllability metrics on complex networks governed by a class of (discrete time) linear decision processes with multiplicative noise. The dynamics are given by a pair consisting of a Markov trend and a linear decision process in which both the "deterministic" and the noise components rely on trend-dependent matrices. We discuss approximate, approximate null, and exact null controllability. Several examples are given to illustrate the links between these concepts and to compare our results with their continuous-time counterpart (given in [16]). We introduce a class of backward stochastic Riccati difference schemes (BSRDS) and study their solvability for particular frameworks. These BSRDS allow one to introduce Gramian-like controllability metrics. As an application of these metrics, we propose a minimal intervention-targeted reduction in the study of gene networks.
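
    For context, the sketch below shows the Gramian intuition behind "controllability metrics" for a plain deterministic discrete-time linear network, i.e. without the Markov trend and multiplicative noise handled by the paper's backward stochastic Riccati schemes; the matrices are toy assumptions.

```python
# Illustrative sketch only: finite-horizon controllability Gramian for a plain
# deterministic system x_{t+1} = A x_t + B u_t (no trend, no multiplicative noise).
import numpy as np

def finite_horizon_gramian(A, B, T):
    """W_T = sum_{k=0}^{T-1} A^k B B^T (A^T)^k."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(T):
        W += Ak @ B @ B.T @ Ak.T
        Ak = Ak @ A
    return W

# Toy 3-node network with control acting on the last node only (assumption).
A = np.array([[0.9, 0.2, 0.0],
              [0.0, 0.7, 0.1],
              [0.0, 0.0, 0.5]])
B = np.array([[0.0], [0.0], [1.0]])
W = finite_horizon_gramian(A, B, T=50)
# lambda_min(W) is a simple "how hard to steer" metric: the minimum-energy input
# reaching x_f from the origin costs x_f^T W^{-1} x_f.
print("rank:", np.linalg.matrix_rank(W), " lambda_min:", np.linalg.eigvalsh(W).min())
```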

    Robust Control and Hot Spots in Dynamic Spatially Interconnected Systems

    This paper develops linear quadratic robust control theory for a class of spatially invariant distributed control systems that appear in areas of economics such as New Economic Geography, management of ecological systems, optimal harvesting of spatially mobile species, and the like. Since this class of problems has an infinite-dimensional state and control space, it would appear analytically intractable. We show that by Fourier transforming the problem, the solution decomposes into a countable number of finite-state-space robust control problems, each of which can be solved by standard methods. We use this convenient property to characterize “hot spots”: points in the transformed space that correspond to “breakdown” points in conventional finite-dimensional robust control, where instabilities appear or the value function loses concavity. We apply our methods to a spatial extension of a well-known optimal fishing model.
    Keywords: Distributed Parameter Systems, Robust Control, Spatial Invariance, Hot Spot, Agglomeration
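
    The sketch below is a rough illustration of the Fourier-decomposition idea under strong simplifying assumptions (sites on a ring, scalar state per site, a Hansen–Sargent-style scalar robust Riccati equation per mode, and a mode-dependent disturbance loading); flagging modes where the effective control weight turns non-positive is only one possible “hot spot” criterion, not the paper's exact characterization.

```python
# Rough illustration, not the paper's model: circulant (spatially invariant) coupling
# on a ring is diagonalised by the FFT, leaving one scalar robust LQ problem per
# Fourier mode.  Kernel, cost weights q, r, b, robustness parameter theta, and the
# per-mode disturbance loading sigma(k) are all assumptions.
import numpy as np

n_sites = 64
kernel = np.zeros(n_sites)
kernel[0], kernel[1], kernel[-1] = -0.2, 0.6, 0.6     # local spillovers (assumption)
a = np.real(np.fft.fft(kernel))                       # per-mode drift a(k)

q, r, b, theta = 1.0, 1.0, 1.0, 2.5                   # weights and robustness (assumption)
sigma = 1.0 + np.abs(a)                               # per-mode disturbance loading (assumption)
m = b ** 2 / r - sigma ** 2 / theta                   # effective control weight per mode

# Scalar robust Riccati per mode: m * p^2 - 2 a(k) p - q = 0; the positive root exists
# only when m > 0.  Modes with m <= 0 are flagged as "hot spots" (concavity lost).
p = np.full(n_sites, np.nan)
hot_spots = np.flatnonzero(m <= 0)
ok = m > 0
p[ok] = (a[ok] + np.sqrt(a[ok] ** 2 + m[ok] * q)) / m[ok]
print("hot-spot modes:", hot_spots)
```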

    Kernel Granger causality and the analysis of dynamical networks

    We propose a method for the analysis of dynamical networks based on a recent kernel-based measure of Granger causality between time series. The generalization of kernel Granger causality to the multivariate case, presented here, shares the following features with the bivariate measure: (i) the nonlinearity of the regression model can be controlled by choosing the kernel function, and (ii) the problem of false causalities, which arises as the complexity of the model increases, is addressed by a selection strategy on the eigenvectors of a reduced Gram matrix whose range represents the additional features due to the second time series. Moreover, there is no a priori assumption that the network must be a directed acyclic graph. We apply the proposed approach to a network of chaotic maps and to a simulated genetic regulatory network: we show that the underlying topology of the network can be reconstructed from time series of the nodes' dynamics, provided that a sufficient number of samples is available. Considering a linear dynamical network built by a preferential-attachment scheme, we show that for limited data bivariate Granger causality is a better choice than methods using L1 minimization. Finally, we consider real expression data from HeLa cells (94 genes, 48 time points). The analysis of static correlations between genes reveals two modules corresponding to well-known transcription factors; the Granger analysis reveals nineteen causal relationships, all involving genes related to tumor development.
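
    For reference, the sketch below computes plain bivariate linear Granger causality on a toy coupled system; the paper's measure replaces the linear regression with a kernel regression and adds the multivariate filtering of false causalities. The AR order and the toy dynamics are illustrative assumptions.

```python
# Illustrative only: linear bivariate Granger causality (the paper's kernel and
# multivariate version generalises this).  AR order and toy dynamics are assumptions.
import numpy as np

def lag_matrix(series, order):
    """Rows are [s_{t-1}, ..., s_{t-order}] for t = order, ..., n-1."""
    n = len(series)
    return np.column_stack([series[order - k:n - k] for k in range(1, order + 1)])

def residual_var(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def granger(x, y, order=2):
    """Granger causality y -> x as the log residual-variance ratio (restricted / full)."""
    target = x[order:]
    X_restricted = lag_matrix(x, order)                       # past of x only
    X_full = np.hstack([X_restricted, lag_matrix(y, order)])  # past of x and y
    return np.log(residual_var(X_restricted, target) / residual_var(X_full, target))

# Toy system: y drives x with lag 1; x does not drive y.
rng = np.random.default_rng(1)
n = 2000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + 0.5 * rng.normal()
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.5 * rng.normal()
print("GC y->x:", granger(x, y), " GC x->y:", granger(y, x))
```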

    Synergetic and redundant information flow detected by unnormalized Granger causality: application to resting state fMRI

    Objectives: We develop a framework for the analysis of synergy and redundancy in the pattern of information flow between subsystems of a complex network. Methods: The presence of redundancy and/or synergy in multivariate time series data makes it difficult to estimate the net flow of information from each driver variable to a given target. We show that, by adopting an unnormalized definition of Granger causality, one can reveal redundant multiplets of variables influencing the target by maximizing the total Granger causality to that target over all possible partitions of the set of driving variables. We then introduce a pairwise index of synergy which, unlike previous definitions of synergy, is zero when two independent sources additively influence the future state of the system. Results: We report the application of the proposed approach to resting state fMRI data from the Human Connectome Project, showing that redundant pairs of regions arise mainly due to spatial contiguity and interhemispheric symmetry, whilst synergy occurs mainly between non-homologous pairs of regions in opposite hemispheres. Conclusions: Redundancy and synergy, in healthy resting brains, display characteristic patterns, revealed by the proposed approach. Significance: The pairwise synergy index introduced here maps the informational character of the system at hand into a weighted complex network; the same approach can be applied to other complex systems whose normal state corresponds to a balance between redundant and synergetic circuits.
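
    The sketch below is one schematic reading of the construction, assuming linear one-step-ahead regressions and taking the unnormalized Granger causality as the plain reduction in residual variance; the paper's exact estimator, and its conditioning on the remaining drivers, may differ. Two independent additive drivers give an index near zero, while a "suppressor" pair whose individual contributions nearly cancel gives a positive (synergetic) index.

```python
# Schematic reading only (assumptions: linear one-step-ahead models; "unnormalized
# Granger causality" taken as the plain residual-variance reduction; no conditioning
# on further drivers).  Positive index: synergy; negative: redundancy; two independent
# additive drivers: approximately zero.
import numpy as np

def resid_var(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def unnormalized_gc(target, X_restricted, X_full):
    """Residual-variance drop when the extra predictors in X_full are added."""
    return resid_var(X_restricted, target) - resid_var(X_full, target)

def synergy_index(y, x1, x2, order=1):
    """GC({x1,x2}->y) - GC(x1->y) - GC(x2->y) with linear one-step-ahead models."""
    T = len(y)
    lag = lambda s: np.column_stack([s[order - k:T - k] for k in range(1, order + 1)])
    tgt, Ly, L1, L2 = y[order:], lag(y), lag(x1), lag(x2)
    gc_joint = unnormalized_gc(tgt, Ly, np.hstack([Ly, L1, L2]))
    return gc_joint - unnormalized_gc(tgt, Ly, np.hstack([Ly, L1])) \
                    - unnormalized_gc(tgt, Ly, np.hstack([Ly, L2]))

# Toy targets driven by the past of two sources.
rng = np.random.default_rng(2)
n = 5000
x1, x2_ind = rng.normal(size=n), rng.normal(size=n)
x2_sup = -0.9 * x1 + 0.2 * rng.normal(size=n)       # "suppressor": nearly cancels x1
def make_target(a, b):
    y = np.zeros(n)
    y[1:] = a[:-1] + b[:-1] + 0.1 * rng.normal(size=n - 1)
    return y
print("independent drivers:", synergy_index(make_target(x1, x2_ind), x1, x2_ind))
print("suppressor pair:    ", synergy_index(make_target(x1, x2_sup), x1, x2_sup))
```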

    Learning stable and predictive structures in kinetic systems: Benefits of a causal approach

    Learning kinetic systems from data is one of the core challenges in many fields. Identifying stable models is essential for the generalization capabilities of data-driven inference. We introduce a computationally efficient framework, called CausalKinetiX, that identifies structure from discrete-time, noisy observations generated by heterogeneous experiments. The algorithm assumes the existence of an underlying, invariant kinetic model, a key criterion for reproducible research. Results on both simulated and real-world examples suggest that learning the structure of kinetic systems benefits from a causal perspective. The identified variables and models allow for a concise description of the dynamics across multiple experimental settings and can be used for prediction in unseen experiments. We observe significant improvements compared to well-established approaches focusing solely on predictive performance, especially for out-of-sample generalization.
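
    The sketch below is not the CausalKinetiX algorithm itself, only the generic invariance idea behind it: rank candidate predictor sets for a target's dynamics by how well a single pooled model transfers to each experimental environment. The data-generating process, the linear model class, and the worst-environment-residual score are all illustrative assumptions.

```python
# Generic invariance-based ranking, not the CausalKinetiX procedure: score each
# candidate predictor set by the worst per-environment residual of one pooled
# linear fit.  Data-generating process, model class, and score are assumptions.
import itertools
import numpy as np

def stability_score(X, dy, env, subset):
    """Max per-environment mean squared residual of a pooled linear fit on `subset`."""
    Xs = X[:, subset]
    beta, *_ = np.linalg.lstsq(Xs, dy, rcond=None)
    resid = dy - Xs @ beta
    return max(np.mean(resid[env == e] ** 2) for e in np.unique(env))

# Toy data: the target dynamics dy truly depend on variables 0 and 1; variable 2 is a
# proxy whose relation to dy shifts across the three environments (non-invariant).
rng = np.random.default_rng(3)
n_per_env, n_envs = 300, 3
X_list, dy_list, env_list = [], [], []
for e in range(n_envs):
    x01 = rng.normal(size=(n_per_env, 2)) + e                 # interventions shift inputs
    dy = 1.5 * x01[:, 0] - 0.8 * x01[:, 1] + 0.1 * rng.normal(size=n_per_env)
    x2 = (0.5 + e) * dy + 0.1 * rng.normal(size=n_per_env)    # environment-dependent proxy
    X_list.append(np.column_stack([x01, x2]))
    dy_list.append(dy)
    env_list.append(np.full(n_per_env, e))
X, dy, env = np.vstack(X_list), np.concatenate(dy_list), np.concatenate(env_list)

# Lower score = more invariant; the true causal set (0, 1) should rank best.
for subset in itertools.chain.from_iterable(
        itertools.combinations(range(3), k) for k in (1, 2, 3)):
    print(subset, round(stability_score(X, dy, env, list(subset)), 3))
```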