45 research outputs found

    On estimation of entropy and mutual information of continuous distributions

    Mutual information is used in a procedure to estimate time delays between recordings of electroencephalogram (EEG) signals originating from epileptic animals and patients. We present a simple and reliable histogram-based method to estimate mutual information. The accuracies of this mutual information estimator and of a similar entropy estimator are discussed. The bias and variance calculations presented can also be applied to discrete-valued systems. Finally, we present simulation results, which are compared with earlier work.
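
    The histogram ("plug-in") estimator the abstract describes can be sketched in a few lines; the bin count, test signals, and function name below are illustrative choices, not the paper's settings:

```python
import numpy as np

def mutual_information_hist(x, y, bins=16):
    # Plug-in estimate of I(X;Y) in bits from a 2-D histogram:
    # bin the samples, normalise to a joint probability table, and
    # sum p(x,y) * log2( p(x,y) / (p(x) p(y)) ) over non-empty bins.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                        # skip empty bins: 0 * log 0 = 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = x + 0.5 * rng.normal(size=10_000)  # dependent pair of signals
mi = mutual_information_hist(x, y)     # clearly positive for this coupling
```

    As the abstract notes, this estimator is biased; the bias grows with the number of bins and shrinks with the number of samples.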

    LeakWatch: Estimating Information Leakage from Java Programs

    Programs that process secret data may inadvertently reveal information about those secrets in their publicly-observable output. This paper presents LeakWatch, a quantitative information leakage analysis tool for the Java programming language; it is based on a flexible “point-to-point” information leakage model, where secret and publicly-observable data may occur at any time during a program’s execution. LeakWatch repeatedly executes a Java program containing both secret and publicly-observable data and uses robust statistical techniques to provide estimates, with confidence intervals, for min-entropy leakage (using a new theoretical result presented in this paper) and mutual information. We demonstrate how LeakWatch can be used to estimate the size of information leaks in a range of real-world Java programs.
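
    For min-entropy leakage under a uniform prior on the secrets, the leakage equals log2 of the sum, over outputs, of the column maxima of the channel matrix P(output | secret). A toy sampling sketch of that formula (an illustration of the general idea only; the function and example program below are hypothetical and are not LeakWatch's API):

```python
import math
import random
from collections import Counter

def estimate_min_entropy_leakage(program, secrets, runs=2000):
    # Build an empirical channel matrix P(output | secret) by running
    # the program repeatedly on uniformly chosen secrets, then return
    # log2( sum over outputs of max over secrets of P(output|secret) ),
    # the min-entropy leakage under a uniform prior.
    counts = {s: Counter() for s in secrets}
    for _ in range(runs):
        s = random.choice(secrets)
        counts[s][program(s)] += 1
    outputs = set().union(*(c.keys() for c in counts.values()))
    posterior = sum(
        max(counts[s][o] / max(sum(counts[s].values()), 1) for s in secrets)
        for o in outputs
    )
    return math.log2(posterior)

random.seed(0)
# A program that reveals exactly the parity of an 8-valued secret,
# i.e. one bit of min-entropy leakage.
leak = estimate_min_entropy_leakage(lambda s: s % 2, list(range(8)))
```

    LeakWatch additionally attaches confidence intervals to such estimates; this sketch omits that step.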

    Inevitable Evolutionary Temporal Elements in Neural Processing: A Study Based on Evolutionary Simulations

    Recent studies have suggested that some neural computational mechanisms are based on the fine temporal structure of spiking activity. However, less effort has been devoted to investigating the evolutionary aspects of such mechanisms. In this paper we explore the issue of temporal neural computation from an evolutionary point of view, using a genetic simulation of the evolutionary development of neural systems. We evolve neural systems in an environment with selective pressure based on mate finding, and examine the temporal aspects of the evolved systems. In repeated evolutionary sessions, there was a significant increase during evolution in the mutual information between the evolved agent's temporal neural representation and the external environment. In ten different simulated evolutionary sessions, there was an increased effect of time-related neural ablations on the agents' fitness. These results suggest that in some fitness landscapes the emergence of temporal elements in neural computation is almost inevitable. Future research using similar evolutionary simulations may shed new light on various biological mechanisms.

    Quality Coding by Neural Populations in the Early Olfactory Pathway: Analysis Using Information Theory and Lessons for Artificial Olfactory Systems

    In this article, we analyze the ability of the early olfactory system to detect and discriminate different odors by means of information-theoretic measurements applied to olfactory bulb activity images. We have studied the role that the diversity and number of receptor neuron types play in encoding chemical information. Our results show that the olfactory receptors of the biological system are weakly correlated and provide good coverage of the input space. The coding capacity of ensembles of olfactory receptors with the same receptive range is maximized when the receptors cover half of the odor input space - a configuration that corresponds to receptors that are not particularly selective. However, the ensemble's performance increases slightly when mixing uncorrelated receptors of different receptive ranges. Our results confirm that low correlation between sensors could be more significant than sensor selectivity for general-purpose chemo-sensory systems, whether these are biological or biomimetic.

    Shaping Embodied Neural Networks for Adaptive Goal-directed Behavior

    The acts of learning and memory are thought to emerge from the modification of synaptic connections between neurons, as guided by sensory feedback during behavior. However, much is unknown about how such synaptic processes can sculpt, and are sculpted by, neuronal population dynamics and interaction with the environment. Here, we embodied a simulated network, inspired by dissociated cortical neuronal cultures, with an artificial animal (an animat) through a sensory-motor loop consisting of structured stimuli, detailed activity metrics incorporating spatial information, and an adaptive training algorithm that takes advantage of spike-timing-dependent plasticity. Using this design, we demonstrated that the network was capable of learning associations between multiple sensory inputs and motor outputs, and that the animat was able to adapt to a new sensory mapping to restore its goal behavior: move toward and stay within a user-defined area. We further showed that successful learning required proper selection of stimuli to encode sensory inputs and a variety of training stimuli with adaptive selection contingent on the animat's behavior. We also found that an individual network had the flexibility to achieve different multi-task goals, and that the same goal behavior could be exhibited with different sets of network synaptic strengths. While lacking the characteristic layered structure of in vivo cortical tissue, the biologically inspired simulated networks could tune their activity in behaviorally relevant ways, demonstrating that leaky integrate-and-fire neural networks have an innate ability to process information. This closed-loop hybrid system is a useful tool for studying the network properties that mediate between synaptic plasticity and behavioral adaptation. The training algorithm provides a stepping stone towards designing future control systems, whether with artificial neural networks or biological animats themselves.

    Inference of financial networks using the normalised mutual information rate

    In this paper we study data from financial markets using an information theory tool that we call the normalised Mutual Information Rate, and we show how to use it to infer the underlying network structure of interrelations in foreign currency exchange rates and stock indices of 14 countries world-wide and the European Union. We first present the mathematical method and discuss its computational aspects, and then apply it to artificial data from chaotic dynamics and to correlated random variates. Next, we apply the method to infer the network structure of the financial data. In particular, we study and reveal the interrelations among the various foreign currency exchange rates and stock indices in two separate networks, for which we also perform an analysis to identify their structural properties. Our results show that both are small-world networks sharing similar properties but also having distinct differences in terms of assortativity. Finally, the consistent relationships depicted among the 15 economies are further supported by a discussion from an economics viewpoint.
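
    The paper's estimator is the normalised mutual information rate; as a simplified stand-in, plain pairwise histogram MI with a fixed threshold already illustrates the network-inference step (the threshold, bin count, and synthetic "assets" below are illustrative assumptions, not the paper's procedure):

```python
import numpy as np
from itertools import combinations

def hist_mi(x, y, bins=8):
    # Plug-in histogram estimate of I(X;Y) in bits.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def infer_network(series, threshold):
    # series: (n_assets, n_samples) array, e.g. log-returns.
    # Link i-j when pairwise MI exceeds the threshold; the fixed
    # threshold stands in for a proper significance test.
    n = series.shape[0]
    adj = np.zeros((n, n), dtype=int)
    for i, j in combinations(range(n), 2):
        if hist_mi(series[i], series[j]) > threshold:
            adj[i, j] = adj[j, i] = 1
    return adj

rng = np.random.default_rng(1)
a = rng.normal(size=5000)
b = a + 0.3 * rng.normal(size=5000)   # strongly coupled to a
c = rng.normal(size=5000)             # independent of both
adj = infer_network(np.vstack([a, b, c]), threshold=0.2)
```

    On these synthetic series the inferred network links only the coupled pair; on real market data, the paper's normalisation and significance analysis replace the hand-picked threshold.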

    Independent EEG Sources Are Dipolar

    Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison).

    Hybrid Statistical Estimation of Mutual Information for Quantifying Information Flow

    Analysis of a probabilistic system often requires learning the joint probability distribution of its random variables. Computing the exact distribution usually requires an exhaustive, precise analysis of all executions of the system. To avoid the high computational cost of such an exhaustive search, statistical analysis has been studied as a way to efficiently obtain approximate estimates by analyzing only a small but representative subset of the system's behavior. In this paper we propose a hybrid statistical estimation method that combines precise and statistical analyses to estimate mutual information and its confidence interval. We show how to combine analyses of different components of the system, carried out with different precision, to obtain an estimate for the whole system. The new method performs weighted statistical analysis with different sample sizes over different components and dynamically finds their optimal sample sizes. Moreover, it can reduce sample sizes by using prior knowledge about systems and a new abstraction-then-sampling technique based on qualitative analysis. We show that the new method outperforms the state of the art in quantifying information leakage.

    A statistic to estimate the variance of the histogram-based mutual information estimator based on dependent pairs of observations

    No full text
    In the case of two signals with independent pairs of observations (x(n), y(n)), a statistic to estimate the variance of the histogram-based mutual information estimator has been derived earlier. We present such a statistic for dependent pairs. To derive this statistic it is necessary to have a reliable statistic to estimate the variance of the sample mean in the case of dependent observations. We derive and discuss this statistic and a statistic to estimate the variance of the mutual information estimator. These statistics are validated by simulations. (C) 1999 Elsevier Science B.V. All rights reserved.
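
    The prerequisite the abstract mentions, a reliable estimate of the variance of a sample mean under dependent observations, can be illustrated with the standard batch-means technique (a generic sketch of that idea, not the paper's statistic):

```python
import numpy as np

def batch_means_variance(z, n_batches=20):
    # Split the series into contiguous batches; batch means are nearly
    # independent when batches are longer than the correlation time,
    # so their spread estimates Var[mean(z)] under dependence.
    m = len(z) // n_batches
    bm = z[: m * n_batches].reshape(n_batches, m).mean(axis=1)
    return bm.var(ddof=1) / n_batches

# AR(1) series: strongly dependent successive observations.
rng = np.random.default_rng(2)
z = np.empty(20_000)
z[0] = rng.normal()
for t in range(1, len(z)):
    z[t] = 0.9 * z[t - 1] + rng.normal()

naive = z.var(ddof=1) / len(z)    # valid only for independent data
robust = batch_means_variance(z)  # accounts for the dependence
```

    For a positively autocorrelated series the naive i.i.d. formula badly underestimates the variance of the mean; the batch-means estimate is correspondingly larger.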