
    The subjective metric of remembered colors: A Fisher-information analysis of the geometry of human chromatic memory.

    In order to explore the metric structure of the space of remembered colors, a computer game was designed in which players with normal color vision had to store a color in memory and later retrieve it by selecting the best match from a continuum of alternatives. All tested subjects exhibited evidence of focal colors in their mnemonic strategy. We found no conclusive evidence that the focal colors of different players tended to cluster around universal prototypes. Based on the Fisher metric, for each subject we defined a notion of distance in color space that captured the accuracy with which similar colors were discriminated or confounded when stored in and retrieved from memory. The notions of distance obtained for different players were remarkably similar. Finally, for each player, we constructed a new color scale in which colors are memorized and retrieved with uniform accuracy.
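    The construction described above can be sketched numerically. The following is a minimal illustration, assuming a one-dimensional hue coordinate, a hypothetical sinusoidal tuning curve, and a Gaussian memory-noise model; none of these values come from the study itself.

```python
import numpy as np

# Hypothetical tuning model: mean retrieved response mu(theta) over a
# 1-D color coordinate in [0, 1], with Gaussian memory noise sigma.
# Both the tuning curve and the noise level are illustrative assumptions.
theta = np.linspace(0.0, 1.0, 1001)
mu = np.sin(2 * np.pi * theta)
sigma = 0.1

# Fisher information of a Gaussian observation model:
# J(theta) = mu'(theta)^2 / sigma^2
dmu = np.gradient(mu, theta)
J = dmu ** 2 / sigma ** 2

# The Fisher length element is sqrt(J(theta)) d(theta); integrating it and
# normalizing yields a re-parameterized coordinate on which memory accuracy
# is uniform (the "new color scale" described in the abstract).
dtheta = theta[1] - theta[0]
arc = np.cumsum(np.sqrt(J)) * dtheta
uniform_scale = (arc - arc[0]) / (arc[-1] - arc[0])
```

    Colors equally spaced on `uniform_scale` are, under this model, discriminated from memory with equal accuracy.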

    Mutual Information of Population Codes and Distance Measures in Probability Space

    We studied the mutual information between a stimulus and a large system consisting of stochastic, statistically independent elements that respond to the stimulus. The mutual information (MI) of the system saturates exponentially with system size. A theory of the rate of saturation of the MI is developed. We show that this rate is controlled by a distance function between the response probabilities induced by different stimuli. This function, which we term the Confusion Distance between two probabilities, is related to the Rényi α-information.
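    For discrete response distributions, the kind of distance discussed here can be sketched as the Chernoff exponent, the tightest member of the Rényi family over α in (0, 1); the two example distributions below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Renyi alpha-divergence D_alpha(p || q) between discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)

def chernoff_distance(p, q, n_grid=99):
    """Tightest error exponent over alpha in (0, 1): the rate at which two
    stimuli cease to be confusable as independent responses are pooled."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    alphas = np.linspace(0.01, 0.99, n_grid)
    return max(-np.log(np.sum(p ** a * q ** (1.0 - a))) for a in alphas)

# Illustrative response distributions of one element to two stimuli (assumed).
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
d_confusion = chernoff_distance(p, q)
```

    The distance is zero only when the two response distributions coincide, and it is symmetric in its arguments, as a confusability measure should be.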

    A theoretical model of neuronal population coding of stimuli with both continuous and discrete dimensions

    In a recent study, the initial rise of the mutual information between the firing rates of N neurons and a set of p discrete stimuli was analytically evaluated, under the assumption that neurons fire independently of one another to each stimulus and that each conditional distribution of firing rates is gaussian. Yet real stimuli or behavioural correlates are high-dimensional, with both discrete and continuously varying features. Moreover, the gaussian approximation implies negative firing rates, which is biologically implausible. Here, we generalize the analysis to the case where the stimulus or behavioural correlate has both a discrete and a continuous dimension. In the case of large noise, we evaluate the mutual information up to the quadratic approximation as a function of population size. We then consider a more realistic distribution of firing rates, truncated at zero, and prove that the resulting correction, with respect to gaussian firing rates, can be expressed simply as a renormalization of the noise parameter. Finally, we demonstrate the effect of averaging the distribution across the discrete dimension, evaluating the mutual information only with respect to the continuously varying correlate.
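    A Monte Carlo caricature of the setup helps fix ideas: one model neuron, two discrete stimuli, gaussian rate noise clipped at zero, and a plug-in mutual-information estimate from a joint histogram. All parameter values are assumptions for illustration; the paper's own treatment is analytical, not simulation-based.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two discrete stimuli driving a single model neuron with gaussian rate
# noise; negative rates are clipped to zero, mimicking the truncated
# distribution discussed in the abstract. Parameter values are assumed.
mean_rate = np.array([5.0, 8.0])      # mean firing rate per stimulus
sigma = 3.0
n_trials = 200_000

stimuli = rng.integers(0, 2, n_trials)
rates = np.maximum(rng.normal(mean_rate[stimuli], sigma), 0.0)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Plug-in mutual information estimate (in nats) from a joint histogram.
joint, _, _ = np.histogram2d(stimuli, rates,
                             bins=[[-0.5, 0.5, 1.5], np.linspace(0, 25, 51)])
joint /= joint.sum()
mi = entropy(joint.sum(axis=0)) + entropy(joint.sum(axis=1)) - entropy(joint.ravel())
```

    With two equiprobable stimuli the information is bounded by log 2 nats, and the clipping at zero is what a gaussian model cannot capture without the noise renormalization the abstract describes.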

    Replica symmetric evaluation of the information transfer in a two-layer network in presence of continuous+discrete stimuli

    In a previous report, we evaluated analytically the mutual information between the firing rates of N independent units and a set of multi-dimensional continuous+discrete stimuli, for a finite population size and in the limit of large noise. Here, we extend the analysis to the case of two interconnected populations, where input units activate output ones via gaussian weights and a threshold-linear transfer function. We evaluate the information carried by a population of M output units, again about continuous+discrete correlates. The mutual information is evaluated by solving saddle-point equations under the assumption of replica symmetry, a method which, by taking into account only the term linear in N of the input information, is equivalent to assuming the noise to be large. Within this limitation, we analyze the dependence of the information on the ratio M/N, on the selectivity of the input units, and on the level of the output noise. We show analytically, and confirm numerically, that in the limit of a linear transfer function and a small ratio between output and input noise, the output information approaches asymptotically the information carried by the input. Finally, we show that the information loss in output does not depend much on the structure of the stimulus, whether purely continuous, purely discrete or mixed, but only on the position of the threshold nonlinearity and on the ratio between input and output noise.
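    The feedforward architecture in question (gaussian weights, threshold-linear transfer, additive output noise) is easy to write down concretely. The sketch below uses illustrative sizes and noise levels, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 100, 50            # input and output population sizes (assumed)

# Gaussian feedforward weights and a threshold-linear transfer function,
# the two ingredients of the model; all numerical values are illustrative.
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(M, N))
threshold = 0.0           # position of the nonlinearity
sigma_out = 0.1           # output noise level

def output_rates(input_rates):
    """One feedforward pass: gaussian output noise added to the summed,
    thresholded input, then rectified by the threshold-linear transfer."""
    h = W @ input_rates - threshold + rng.normal(0.0, sigma_out, size=M)
    return np.maximum(h, 0.0)

x = np.maximum(rng.normal(1.0, 0.5, size=N), 0.0)   # example input rates
y = output_rates(x)
```

    Moving `threshold` and varying `sigma_out` relative to the input noise are exactly the two knobs that, per the abstract, govern the information loss between input and output.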

    Representing Where along with What Information in a Model of a Cortical Patch

    Behaving in the real world requires flexibly combining and maintaining information about both continuous and discrete variables. In the visual domain, several lines of evidence show that neurons in some cortical networks can simultaneously represent information about the position and identity of objects, and maintain this combined representation when the object is no longer present. The underlying network mechanism for this combined representation is, however, unknown. In this paper, we approach this issue through a theoretical analysis of recurrent networks. We present a model of a cortical network that can retrieve information about the identity of objects from incomplete transient cues, while simultaneously representing their spatial position. Our results show that two factors are important in making this possible: A) a metric organisation of the recurrent connections, and B) a spatially localised change in the linear gain of neurons. Metric connectivity enables a localised retrieval of information about object identity, while gain modulation ensures localisation in the correct position. Importantly, we find that the amount of information that the network can retrieve and retain about identity is strongly affected by the amount of information it maintains about position. This balance can be controlled by global signals that change the neuronal gain. These results show that anatomical and physiological properties, which have long been known to characterise cortical networks, naturally endow them with the ability to maintain a conjunctive representation of the identity and location of objects.
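    The interplay of factors A) and B) can be sketched on a ring of rate units: distance-dependent recurrent weights plus a localized gain increase steer a bump of activity to the cued position. Every parameter below (network size, kernel widths, gain strength, dynamics) is an illustrative assumption, not the paper's model.

```python
import numpy as np

n = 200
pos = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

def ring_dist(a, b):
    d = np.abs(a - b)
    return np.minimum(d, 2.0 * np.pi - d)

# (A) metric connectivity: recurrent strength decays with distance on a ring
W = np.exp(-ring_dist(pos[:, None], pos[None, :]) ** 2 / (2 * 0.3 ** 2))

# (B) spatially localised gain increase around a cued position
cue = np.pi
gain = 1.0 + 0.5 * np.exp(-ring_dist(pos, cue) ** 2 / (2 * 0.5 ** 2))

# Iterate simple threshold-linear dynamics from a uniform cue; a bump of
# activity forms and settles at the gain-modulated location.
r = np.full(n, 0.1)
for _ in range(50):
    r = np.maximum(gain * (W @ r) / n, 0.0)
    r /= r.max()                 # crude normalisation to keep rates bounded

peak_position = pos[np.argmax(r)]
```

    The bump ends up at the cued position: the metric kernel localizes the activity, and the gain profile selects where.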

    Bursts and Isolated Spikes Code for Opposite Movement Directions in Midbrain Electrosensory Neurons

    Directional selectivity, in which neurons respond strongly to an object moving in a given direction but weakly or not at all to the same object moving in the opposite direction, is a crucial computation that is thought to provide a neural correlate of motion perception. However, directional selectivity has been traditionally quantified by using the full spike train, which does not take into account particular action potential patterns. We investigated how different action potential patterns, namely bursts (i.e. packets of action potentials followed by quiescence) and isolated spikes, contribute to movement direction coding in a mathematical model of midbrain electrosensory neurons. We found that bursts and isolated spikes could be selectively elicited when the same object moved in opposite directions. In particular, it was possible to find parameter values for which our model neuron did not display directional selectivity when the full spike train was considered but displayed strong directional selectivity when bursts or isolated spikes were instead considered. Further analysis of our model revealed that an intrinsic burst mechanism based on subthreshold T-type calcium channels was not required to observe parameter regimes for which bursts and isolated spikes code for opposite movement directions. However, this burst mechanism enhanced the range of parameter values for which such regimes were observed. Experimental recordings from midbrain neurons confirmed our modeling prediction that bursts and isolated spikes can indeed code for opposite movement directions. Finally, we quantified the performance of a plausible neural circuit and found that it could respond more or less selectively to isolated spikes for a wide range of parameter values when compared with an interspike interval threshold. Our results thus show for the first time that different action potential patterns can differentially encode movement and that traditional measures of directional selectivity need to be revised in such cases.
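    Splitting a spike train into bursts and isolated spikes is commonly done with an interspike-interval criterion, as mentioned at the end of the abstract. A minimal sketch, assuming a 10 ms threshold (an illustrative value, not one taken from the study):

```python
import numpy as np

def split_bursts_and_isolated(spike_times, isi_threshold=0.01):
    """Label each spike as part of a burst (preceded or followed by an
    interspike interval below isi_threshold) or as an isolated spike.
    The 10 ms default threshold is an assumption, not a study value."""
    t = np.asarray(spike_times, float)
    isi = np.diff(t)
    short_before = np.r_[False, isi < isi_threshold]
    short_after = np.r_[isi < isi_threshold, False]
    in_burst = short_before | short_after
    return t[in_burst], t[~in_burst]

# Example train (seconds): a 3-spike burst, an isolated spike, a 2-spike
# burst, and a final isolated spike.
spikes = [0.000, 0.004, 0.007, 0.100, 0.250, 0.254, 0.500]
bursts, isolated = split_bursts_and_isolated(spikes)
```

    Directional selectivity can then be computed separately on `bursts` and `isolated`, which is the comparison the study's analysis turns on.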

    An Information Theoretic Criterion for Empirical Validation of Time Series Models

    Simulated models suffer intrinsically from validation and comparison problems. The choice of a suitable indicator quantifying the distance between the model and the data is pivotal to model selection. However, how to validate and discriminate between alternative models is still an open problem calling for further investigation, especially in light of the increasing use of simulations in social sciences. In this paper, we present an information theoretic criterion to measure how closely models' synthetic output replicates the properties of observable time series, without the need to resort to any likelihood function or to impose stationarity requirements. The indicator is sufficiently general to be applied to any kind of model able to simulate or predict time series data, from simple univariate models such as Auto Regressive Moving Average (ARMA) and Markov processes to more complex objects including agent-based or dynamic stochastic general equilibrium models. More specifically, we use a simple function of the L-divergence computed at different block lengths in order to select the model that is better able to reproduce the distributions of time changes in the data. To evaluate the L-divergence, probabilities are estimated across frequencies including a correction for the systematic bias. Finally, using a known data generating process, we show how this indicator can be used to validate and discriminate between different models, providing a precise measure of the distance between each of them and the data.
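    The general recipe (compare distributions of time changes at several block lengths, sum a divergence across them) can be sketched as follows. The Jensen-Shannon divergence is used here purely as a stand-in for the paper's L-divergence, whose exact form and bias correction are not reproduced; the data and model series are synthetic.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (nats) -- a stand-in for the paper's
    L-divergence; not the criterion's exact functional form."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def block_change_distribution(x, block, bins):
    """Histogram of changes of a series over lags of the given block length."""
    changes = x[block:] - x[:-block]
    hist, _ = np.histogram(changes, bins=bins)
    return hist / hist.sum()

rng = np.random.default_rng(2)
data = np.cumsum(rng.normal(0, 1, 5000))    # "observed" series (synthetic)
model = np.cumsum(rng.normal(0, 1, 5000))   # candidate model output

# Sum the divergence over several block lengths (assumed choices).
bins = np.linspace(-20, 20, 81)
score = sum(js_divergence(block_change_distribution(data, b, bins),
                          block_change_distribution(model, b, bins))
            for b in (1, 5, 20))
```

    A lower `score` indicates a model whose distributions of time changes better match the data across scales; competing models would be ranked by this number.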