513 research outputs found

    Path mutual information for a class of biochemical reaction networks

    Living cells encode and transmit information in the temporal dynamics of biochemical components. Gaining a detailed understanding of the input-output relationship in biological systems therefore requires quantitative measures that capture the interdependence between complete time trajectories of biochemical components. Mutual information provides such a measure, but its calculation in the context of stochastic reaction networks is associated with mathematical challenges. Here we show how to estimate the mutual information between complete paths of two molecular species that interact with each other through biochemical reactions. We demonstrate our approach using three simple case studies. Comment: 6 pages, 2 figures
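    The path-wise quantity itself has no simple closed form, but the setting is easy to reproduce in simulation. Below is a minimal sketch, not the paper's estimator: it simulates a toy two-species network (X drives synthesis of Y) with Gillespie's algorithm and computes a crude plug-in mutual information between the time-binned states rather than between complete paths. All rate constants, the grid resolution, and the network itself are illustrative assumptions.

```python
# Toy illustration only: Gillespie simulation of a two-species network and a
# plug-in MI estimate between sampled states (a coarse proxy, NOT path MI).
import numpy as np

rng = np.random.default_rng(0)

def gillespie(t_end, k_bx=1.0, k_dx=0.1, k_sy=5.0, k_dy=0.5):
    """Toy network: X is born and dies; X catalyses synthesis of Y; Y decays."""
    x, y, t = 0, 0, 0.0
    times, states = [0.0], [(0, 0)]
    while t < t_end:
        rates = np.array([k_bx, k_dx * x, k_sy * x, k_dy * y])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        r = rng.choice(4, p=rates / total)
        if r == 0:   x += 1   # birth of X
        elif r == 1: x -= 1   # death of X
        elif r == 2: y += 1   # X-dependent synthesis of Y
        else:        y -= 1   # decay of Y
        times.append(t)
        states.append((x, y))
    return np.array(times), np.array(states)

def sample_on_grid(times, states, grid):
    """Piecewise-constant trajectory sampled at regular grid points."""
    idx = np.searchsorted(times, grid, side="right") - 1
    return states[idx]

def plugin_mi(xs, ys):
    """Histogram (plug-in) mutual information estimate, in nats."""
    joint, _, _ = np.histogram2d(xs, ys, bins=(int(xs.max()) + 1, int(ys.max()) + 1))
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

times, states = gillespie(t_end=200.0)
sampled = sample_on_grid(times, states, np.linspace(0.0, 200.0, 2000))
print("MI estimate (nats):", plugin_mi(sampled[:, 0], sampled[:, 1]))
```

    The interesting content of the paper is precisely what this sketch leaves out: the mutual information between the full trajectories, which depends on the timing of every reaction event, not just on marginal state statistics.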

    Investigating Information Flows in Spiking Neural Networks With High Fidelity

    The brains of many organisms are capable of a wide variety of complex computations. This capability must be undergirded by a more general-purpose computational capacity. The exact nature of this capacity, how it is distributed across the brains of organisms and how it arises throughout the course of development is an open topic of scientific investigation. Individual neurons are widely considered to be the fundamental computational units of brains. Moreover, the finest scale at which large-scale recordings of brain activity can be performed is the spiking activity of neurons, and our ability to perform these recordings over large numbers of neurons and with fine spatial resolution is increasing rapidly. This makes the spiking activity of individual neurons a highly attractive data modality on which to study neural computation. The framework of information dynamics has proven to be a successful approach to interrogating the capacity for general-purpose computation. It does this by revealing the atomic information processing operations of information storage, transfer and modification. Unfortunately, the study of information flows and other information processing operations from the spiking activity of neurons has been severely hindered by the lack of effective tools for estimating these quantities on this data modality. This thesis remedies this situation by presenting an estimator for information flows, as measured by Transfer Entropy (TE), that operates in continuous time on event-based data such as spike trains. Unlike the previous approach to the estimation of this quantity, which discretised the process into time bins, this estimator operates on the raw inter-spike intervals. It is demonstrated to be far superior to the previous discrete-time approach in terms of consistency, rate of convergence and bias. Most importantly, unlike the discrete-time approach, which requires a hard tradeoff between capturing fine temporal precision and capturing history effects occurring over reasonable time intervals, this estimator can capture history effects occurring over relatively large intervals without any loss of temporal precision. This estimator is applied to developing dissociated cultures of cortical rat neurons, thereby providing the first high-fidelity study of information flows on spiking data. It is found that the spatial structure of the flows locks in, to a significant extent, at the point of their emergence, and that certain nodes occupy specialised computational roles as either transmitters, receivers or mediators of information flow. Moreover, these roles are also found to lock in early. In order to fully understand the structure of neural information flows, however, we are required to go beyond pairwise interactions, and indeed multivariate information flows have become an important tool in the inference of effective networks from neuroscience data. These are directed networks where each node is connected to a minimal set of sources which maximally reduce the uncertainty in its present state. However, the application of multivariate information flows to the inference of effective networks from spiking data has been hampered by the above-mentioned issues with preexisting estimation techniques.
Here, a greedy algorithm which iteratively builds a set of parents for each target node using multivariate transfer entropies, and which has already been well validated in the context of traditional discretely sampled time series, is adapted for use in conjunction with the newly developed estimator for event-based data. The combination of the greedy algorithm and continuous-time estimator is then validated on simulated examples for which the ground truth is known. The new capabilities in the estimation of information flows and the inference of effective networks on event-based data presented in this work represent a very substantial step forward in our ability to perform these analyses on the ever-growing set of high-resolution, large-scale recordings of interacting neurons. As such, this work promises to enable substantial quantitative insights in the future regarding how neurons interact, how they process information, and how this changes under different conditions such as disease.
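    To make the greedy parent-selection step concrete, here is an illustrative sketch on time-binned binary spike trains. It is a stand-in only: the thesis pairs the greedy search with its continuous-time inter-spike-interval estimator, whereas this sketch uses a naive plug-in conditional transfer entropy, a one-bin history, and an arbitrary fixed stopping threshold in place of the surrogate-based significance tests normally used in practice.

```python
# Illustrative greedy effective-network inference on binned binary spike
# trains. The TE estimator here is a naive plug-in stand-in, not the
# thesis's continuous-time estimator.
import numpy as np
from collections import Counter

def cond_te(target, source, parents):
    """Naive plug-in TE(source -> target | parents), one-bin histories."""
    n = len(target) - 1
    c_full, c_past, c_nosrc, c_pastnosrc = Counter(), Counter(), Counter(), Counter()
    for t in range(n):
        past = (target[t],) + tuple(p[t] for p in parents)
        nxt, src = target[t + 1], source[t]
        c_full[(past, src, nxt)] += 1
        c_past[(past, src)] += 1
        c_nosrc[(past, nxt)] += 1
        c_pastnosrc[past] += 1
    te = 0.0
    for (past, src, nxt), c in c_full.items():
        p_with = c / c_past[(past, src)]                       # p(next | past, source)
        p_without = c_nosrc[(past, nxt)] / c_pastnosrc[past]   # p(next | past)
        te += (c / n) * np.log(p_with / p_without)
    return te

def greedy_parents(target, candidates, threshold=0.01):
    """Greedily add the candidate with the largest conditional TE until the
    gain falls below a threshold (a stand-in for surrogate significance tests)."""
    parents, remaining = [], list(candidates)
    while remaining:
        gains = [cond_te(target, s, parents) for s in remaining]
        best = int(np.argmax(gains))
        if gains[best] < threshold:
            break
        parents.append(remaining.pop(best))
    return parents
```

    The conditioning on already-selected parents is what distinguishes this multivariate procedure from pairwise TE: a candidate whose apparent influence is fully explained by an existing parent contributes no further uncertainty reduction and is not added.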

    A Development of Transfer Entropy in Continuous-Time

    The quantification of causal relationships between time series data is a fundamental problem in fields including neuroscience, social networking, finance, and machine learning. Amongst the various means of measuring such relationships, information-theoretic approaches form a rapidly developing area alongside other methods. One such approach is to make use of the notion of transfer entropy (TE). Broadly speaking, TE is an information-theoretic measure of information transfer between two stochastic processes. Schreiber's 2000 definition of TE characterizes information transfer as an informational divergence between conditional probability mass functions. The original definition is native to discrete-time stochastic processes whose comprising random variables have a discrete state space. While this formalism is applicable to a wealth of practical scenarios, there is a wide range of circumstances under which the processes of interest are indexed over an uncountable set (usually an interval). One can generalize Schreiber's definition to handle the case when the random variables comprising the processes have state space R via the Radon-Nikodym Theorem, as demonstrated by Kaiser and Schreiber in 2002. A rigorous treatment of TE among processes that are either indexed over an uncountable set or do not have R as the state space of their comprising random variables has been lacking in the literature. A common workaround to this theoretical deficiency is to discretize time to create new stochastic processes and then apply Schreiber's definition to these resulting processes. These time-discretization workarounds have been widely used as a means to intuitively capture the notion of information transfer between processes in continuous time, that is, those which are indexed by an interval. These approaches, while effective and practicable, do not provide a native definition of TE in continuous time. We generalize Schreiber's definition to the case when the processes are comprised of random variables with a Polish state space, and generalize further to the case when the indexing set is an interval via projective limits. Our main result, Theorem 5, is a rigorous recasting of a claim made by Spinney, Prokopenko, and Lizier in 2016, which characterizes when continuous-time TE can be obtained as a limit of discrete-time TE. In many applications, the instantaneous transfer entropy or transfer entropy rate is of particular interest. Using our definitions, we define the transfer entropy rate as the right-hand derivative of the expected pathwise transfer entropy (EPT) defined in Section 2.3. To this end, we use our main results to prove some of its properties, including a rigorous version of a result stated without proof in work by Spinney, Prokopenko, and Lizier regarding a particularly well-behaved class of stationary processes. We then consider time-homogeneous Markov jump processes and provide an analytic form of the EPT via a Girsanov formula, and finally, using a corollary of our main result, we demonstrate how to apply our main result to a lagged Poisson point process, providing a concrete example of two processes to which our aforementioned results apply
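    For reference, the discrete-time definition that the thesis generalizes can be written as follows (a standard rendering of Schreiber's TE from a source process Y to a target X with history lengths k and l; the notation is ours, not the thesis's):

```latex
% Schreiber's discrete-time transfer entropy from Y to X:
\[
  T_{Y \to X} \;=\; \sum_{x_{n+1},\, x_n^{(k)},\, y_n^{(l)}}
    p\!\left(x_{n+1},\, x_n^{(k)},\, y_n^{(l)}\right)
    \log \frac{p\!\left(x_{n+1} \mid x_n^{(k)},\, y_n^{(l)}\right)}
              {p\!\left(x_{n+1} \mid x_n^{(k)}\right)}
\]
% Here x_n^{(k)} = (x_n, ..., x_{n-k+1}) and y_n^{(l)} = (y_n, ..., y_{n-l+1})
% are the target and source history vectors. The thesis's transfer entropy
% rate is then the right-hand derivative of the expected pathwise transfer
% entropy as the time discretization vanishes (its Section 2.3).
```

    The time-discretization workarounds mentioned above amount to applying this formula to processes obtained by binning a continuous-time process; the thesis's contribution is to make precise when such discretizations converge to a native continuous-time quantity.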

    PrivateSNN: Privacy-Preserving Spiking Neural Networks

    How can we bring both privacy and energy-efficiency to a neural system? In this paper, we propose PrivateSNN, which aims to build low-power Spiking Neural Networks (SNNs) from a pre-trained ANN model without leaking sensitive information contained in a dataset. Here, we tackle two types of leakage problems: 1) Data leakage is caused when the networks access real training data during an ANN-SNN conversion process. 2) Class leakage is caused when class-related features can be reconstructed from network parameters. In order to address the data leakage issue, we generate synthetic images from the pre-trained ANNs and convert ANNs to SNNs using the generated images. However, converted SNNs remain vulnerable to class leakage since the weight parameters have the same (or scaled) value with respect to ANN parameters. Therefore, we encrypt SNN weights by training SNNs with a temporal spike-based learning rule. Updating weight parameters with temporal data makes SNNs difficult to interpret in the spatial domain. We observe that the encrypted PrivateSNN eliminates data and class leakage issues with a slight performance drop (less than ~2%) and a significant energy-efficiency gain (about 55x) compared to the standard ANN. We conduct extensive experiments on various datasets including CIFAR10, CIFAR100, and TinyImageNet, highlighting the importance of privacy-preserving SNN training. Comment: Accepted to AAAI 2022
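    As a rough illustration of the first mitigation step, data-free image synthesis from the frozen ANN, here is a hedged PyTorch sketch. The image shape, hyperparameters, and the `convert_ann_to_snn` / `train_with_temporal_rule` helpers named at the end are all hypothetical placeholders, not the paper's actual procedure.

```python
# Hypothetical sketch of data-free synthetic-image generation: optimize
# random inputs so a frozen pre-trained ANN assigns them to target classes,
# so no real training data is touched during conversion.
import torch
import torch.nn.functional as F

def synthesize_images(ann, n_classes, n_per_class=8, steps=200, lr=0.05):
    """Return synthetic images and labels derived only from the frozen ANN."""
    ann.eval()
    labels = torch.arange(n_classes).repeat_interleave(n_per_class)
    # CIFAR-like 3x32x32 inputs are an assumption for this sketch.
    x = torch.randn(len(labels), 3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(ann(x), labels)  # push inputs toward class logits
        loss.backward()
        opt.step()
    return x.detach(), labels

# Pipeline sketch (helper functions below are hypothetical, not from the paper):
# images, labels = synthesize_images(pretrained_ann, n_classes=10)
# snn = convert_ann_to_snn(pretrained_ann, calib_data=images)
# train_with_temporal_rule(snn, images, labels)  # "encrypts" weights temporally
```

    The second step is the one that addresses class leakage: retraining the converted SNN with a temporal spike-based rule moves the weights away from the (scaled) ANN parameters, so class features can no longer be reconstructed from them in the spatial domain.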