
    A Bivariate Measure of Redundant Information

    We define a measure of redundant information based on projections in the space of probability distributions. Redundant information between random variables is information that is shared between those variables. In contrast to mutual information, however, redundant information denotes information that is shared about the outcome of a third variable. Formalizing this concept, and being able to measure it, is required for the non-negative decomposition of mutual information into redundant and synergistic information. Previous attempts to formalize redundant or synergistic information struggle to capture some desired properties. We introduce a new formalism for redundant information and prove that it satisfies all the necessary properties outlined in earlier work, as well as an additional criterion that we propose to be necessary to capture redundancy. We also demonstrate the behaviour of this new measure for several examples, compare it to previous measures, and apply it to the decomposition of transfer entropy.
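    As an illustrative aside (this is plain mutual information on toy joint distributions, not the projection-based measure proposed in the paper), the sketch below shows why such a decomposition is needed: for Z = X XOR Y each input alone carries zero information about Z although the pair determines it completely (pure synergy), while for a copied bit the same single bit is shared redundantly by both inputs.

```python
# Illustrative sketch: mutual information from an explicit joint distribution,
# showing why a redundancy/synergy decomposition of I(X,Y;Z) is needed.
from collections import defaultdict
from itertools import product
from math import log2

def mutual_information(joint, a_idx, b_idx):
    """I(A;B) in bits, where `joint` maps outcome tuples to probabilities and
    a_idx/b_idx select the coordinates forming A and B."""
    pa, pb, pab = defaultdict(float), defaultdict(float), defaultdict(float)
    for outcome, p in joint.items():
        a = tuple(outcome[i] for i in a_idx)
        b = tuple(outcome[i] for i in b_idx)
        pa[a] += p; pb[b] += p; pab[(a, b)] += p
    return sum(p * log2(p / (pa[a] * pb[b])) for (a, b), p in pab.items() if p > 0)

# XOR: Z = X ^ Y with X, Y independent fair coins -> purely synergistic.
xor = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}
print(mutual_information(xor, (0,), (2,)))      # I(X;Z)   = 0.0
print(mutual_information(xor, (1,), (2,)))      # I(Y;Z)   = 0.0
print(mutual_information(xor, (0, 1), (2,)))    # I(X,Y;Z) = 1.0 bit (synergy)

# Copy: Z = X = Y -> the same 1 bit is fully redundant between X and Y.
copy = {(x, x, x): 0.5 for x in (0, 1)}
print(mutual_information(copy, (0,), (2,)))     # I(X;Z)   = 1.0
print(mutual_information(copy, (0, 1), (2,)))   # I(X,Y;Z) = 1.0 bit (redundancy)
```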

    Representational information: a new general notion and measure of information

    In what follows, we introduce the notion of representational information (information conveyed by sets of dimensionally defined objects about their superset of origin) as well as an original deterministic mathematical framework for its analysis and measurement. The framework, based in part on categorical invariance theory [Vigo, 2009], unifies three key constructs of universal science: invariance, complexity, and information. From this unification we define the amount of information that a well-defined set of objects R carries about its finite superset of origin S, as the rate of change in the structural complexity of S (as determined by its degree of categorical invariance), whenever the objects in R are removed from the set S. The measure captures deterministically the significant role that context and category structure play in determining the relative quantity and quality of subjective information conveyed by particular objects in multi-object stimuli.

    A measure of statistical complexity based on predictive information

    We introduce an information-theoretic measure of statistical structure, called 'binding information', for sets of random variables, and compare it with several previously proposed measures including excess entropy, Bialek et al.'s predictive information, and the multi-information. We derive some of the properties of the binding information, particularly in relation to the multi-information, and show that, for finite sets of binary random variables, the processes which maximise the binding information are the 'parity' processes. Finally, we discuss some of the implications this has for the use of the binding information as a measure of complexity.
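    A minimal sketch, assuming the common identification of binding information with the dual total correlation B(X) = H(X) - sum_i H(X_i | X_rest): the snippet below evaluates it alongside the multi-information for a three-bit parity distribution, where the binding information attains its maximal value of n - 1 bits.

```python
# Sketch: multi-information (total correlation) and binding information
# (assumed here to be the dual total correlation) for a 3-bit parity process.
from itertools import product
from math import log2

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

# Parity: X1, X2 uniform, X3 = X1 XOR X2 (each variable is the XOR of the others).
parity = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

n = 3
H_joint = entropy(parity)                                        # 2.0 bits
H_single = [entropy(marginal(parity, (i,))) for i in range(n)]
H_rest = [entropy(marginal(parity, tuple(j for j in range(n) if j != i)))
          for i in range(n)]

tc = sum(H_single) - H_joint                                     # multi-information
binding = H_joint - sum(H_joint - H_rest[i] for i in range(n))   # H(X) - sum H(Xi|rest)
print(H_joint, tc, binding)   # 2.0, 1.0, 2.0 -> binding information is maximal (n-1 bits)
```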

    Information entropy as a measure of the quality of a nuclear density distribution

    The information entropy of a nuclear density distribution is calculated for a number of nuclei. Various phenomenological models for the density distribution using different geometries are employed. Nuclear densities calculated within various microscopic mean-field approaches are also employed. It turns out that the entropy increases on going from crude phenomenological models to more sophisticated (microscopic) ones. It is concluded that the larger the information entropy, the better the quality of the nuclear density distribution. An alternative approach is also examined: the net information content, i.e. the sum of the information entropies in position and momentum space, $S_r + S_k$. It is indicated that $S_r + S_k$ is a maximum when the best fit to experimental data of the density and momentum distributions is attained.
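    As a hedged numerical illustration (a Gaussian density rather than a realistic nuclear profile, with a hypothetical width sigma), the sketch below evaluates $S_r = -\int \rho \ln\rho \, d^3r$ by quadrature and checks that a minimum-uncertainty Gaussian pair saturates the entropic bound $S_r + S_k \ge 3(1 + \ln\pi)$.

```python
# Numeric sketch: position- and momentum-space Shannon entropies of a normalized
# Gaussian density, compared against the bound S_r + S_k >= 3(1 + ln(pi)).
import numpy as np

def entropy_radial(rho, r, dr):
    """S = -int rho ln(rho) d^3r for a spherically symmetric, normalized density rho(r)."""
    safe = np.clip(rho, 1e-300, None)          # avoid log(0) where rho underflows
    return -np.sum(4.0 * np.pi * r**2 * rho * np.log(safe)) * dr

sigma = 1.0                     # position-space width (arbitrary units, hypothetical)
sigma_k = 1.0 / (2.0 * sigma)   # momentum-space width of a minimum-uncertainty packet

r, dr = np.linspace(1e-6, 20.0, 200_000, retstep=True)
rho_r = (2.0 * np.pi * sigma**2) ** -1.5 * np.exp(-r**2 / (2.0 * sigma**2))
rho_k = (2.0 * np.pi * sigma_k**2) ** -1.5 * np.exp(-r**2 / (2.0 * sigma_k**2))

S_r = entropy_radial(rho_r, r, dr)   # analytic value: 1.5 * (1 + ln(2 pi sigma^2))
S_k = entropy_radial(rho_k, r, dr)
print(S_r + S_k, 3.0 * (1.0 + np.log(np.pi)))   # both ~= 6.434 (bound is saturated)
```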

    Sensory capacity: an information theoretical measure of the performance of a sensor

    For a general sensory system following an external stochastic signal, we introduce the sensory capacity. This quantity characterizes the performance of a sensor: the sensory capacity is maximal if the instantaneous state of the sensor carries as much information about the signal as the whole time series of sensor states. We show that adding a memory to the sensor increases the sensory capacity; this increase quantifies the improvement of the sensor upon the addition of the memory. Our results are obtained within the framework of stochastic thermodynamics of bipartite systems, which allows for the definition of an efficiency relating the rate at which the sensor learns about the signal to the energy dissipated by the sensor, given by the thermodynamic entropy production. We demonstrate a general tradeoff between sensory capacity and efficiency: if the sensory capacity is equal to its maximum value of 1, then the efficiency must be less than 1/2. As a physical realization of a sensor, we consider a two-component cellular network estimating a fluctuating external ligand concentration as its signal. This model leads to coupled linear Langevin equations that allow us to obtain explicit analytical results.

    A new information theoretical measure of global and local spatial association

    In this paper a new measure of spatial association, the S statistic, is developed. The proposed measure is based on information theory: a spatially weighted information (entropy) measure is defined that takes the spatial configuration into account. The S statistic has an intuitive interpretation and fulfills the properties expected of an entropy measure. Moreover, it is a global measure of spatial association that can be decomposed into Local Indicators of Spatial Association (LISA). The new measure is tested on a dataset of employment in the culture sector attached to the wards of Stockholm County and compared with the results from current global and local measures of spatial association. It is shown that the proposed S statistic shares many properties with Moran's I and the Getis-Ord Gi statistic. The local Si statistic shows significant spatial association similar to the Gi statistic, but has the advantage that it can be aggregated into a global measure of spatial association. The statistic can also be extended to bivariate distributions. It is further shown that the commonly used empirical Bayes approach can be interpreted as a Kullback-Leibler divergence measure. An advantage of the S statistic is that it selects only the most robust clusters, eliminating the contribution of smaller clusters composed of few observations that may inflate the global measure. Keywords: global and local measures of spatial association, LISA, S statistic, Gi statistic, Moran's I, Kullback-Leibler divergence.
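    For orientation (this is Moran's I, the classical benchmark mentioned in the abstract, not the proposed S statistic), a minimal sketch with hypothetical ward values and a toy contiguity matrix:

```python
# Sketch of global Moran's I, the classical measure of spatial association
# against which the proposed S statistic is compared.
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x (length n) and spatial weights w (n x n, zero diagonal)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    num = np.sum(w * np.outer(z, z))
    return (len(x) / w.sum()) * num / np.sum(z**2)

# Toy example: four wards on a line with rook-contiguity weights (hypothetical data).
employment = [10.0, 12.0, 3.0, 2.0]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i(employment, w))   # > 0: similar values cluster among neighbours
```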

    Justification of Logarithmic Loss via the Benefit of Side Information

    We consider a natural measure of relevance: the reduction in optimal prediction risk in the presence of side information. For any given loss function, this relevance measure captures the benefit of side information for performing inference on a random variable under this loss function. When such a measure satisfies a natural data processing property, and the random variable of interest has alphabet size greater than two, we show that it is uniquely characterized by the mutual information, and the corresponding loss function coincides with logarithmic loss. In doing so, our work provides a new characterization of mutual information and justifies its use as a measure of relevance. When the alphabet is binary, we characterize the only admissible forms the measure of relevance can assume while obeying the specified data processing property. Our results naturally extend to measuring causal influence between stochastic processes, where we unify different causal-inference measures in the literature as instantiations of directed information.
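    A small sketch of the logarithmic-loss special case (a toy joint distribution with hypothetical numbers): under log loss the optimal prediction risk without side information is H(X), with side information Y it is H(X|Y), so the reduction in risk is exactly the mutual information I(X;Y).

```python
# Sketch: under logarithmic loss, the reduction in optimal prediction risk
# provided by side information Y equals the mutual information I(X;Y).
from collections import defaultdict
from math import log2

# A small joint distribution p(x, y) (hypothetical numbers).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

px, py = defaultdict(float), defaultdict(float)
for (x, y), p in joint.items():
    px[x] += p
    py[y] += p

H_X = -sum(p * log2(p) for p in px.values())                            # risk with no side info
H_X_given_Y = -sum(p * log2(p / py[y]) for (_, y), p in joint.items())  # risk given Y
print(H_X - H_X_given_Y)   # reduction in risk = I(X;Y)
```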