
    An Efficient Coding Hypothesis Links Sparsity and Selectivity of Neural Responses

    To what extent are sensory responses in the brain compatible with first-order principles? The efficient coding hypothesis posits that neurons use as few spikes as possible to faithfully represent natural stimuli. However, many sparsely firing neurons in higher brain areas seem to violate this hypothesis in that they respond more to familiar stimuli than to nonfamiliar stimuli. We reconcile this discrepancy by showing that efficient sensory responses give rise to stimulus selectivity that depends on the stimulus-independent firing threshold and the balance between excitatory and inhibitory inputs. We construct a cost function that enforces minimal firing rates in model neurons by linearly punishing suprathreshold synaptic currents. By contrast, subthreshold currents are punished quadratically, which allows us to optimally reconstruct sensory inputs from elicited responses. We train synaptic currents on many renditions of a particular bird's own song (BOS) and on few renditions of conspecific birds' songs (CONs). During training, model neurons develop a response selectivity with complex dependence on the firing threshold. At low thresholds, they fire densely and prefer CON and the reversed BOS (REV) over BOS. However, at high thresholds or when hyperpolarized, they fire sparsely and prefer BOS over REV and over CON. Based on this selectivity reversal, our model suggests that preference for a highly familiar stimulus corresponds to a high-threshold or strong-inhibition regime of an efficient coding strategy. Our findings apply to songbird mirror neurons, and in general, they suggest that the brain may be endowed with simple mechanisms to rapidly change the selectivity of neural responses to focus sensory processing on either familiar or nonfamiliar stimuli. In summary, we find support for the efficient coding hypothesis and provide new insights into the interplay between the sparsity and selectivity of neural responses.
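The asymmetric penalty described in this abstract (linear above the firing threshold, quadratic below) can be sketched as a simple cost function. The sketch below is an illustrative reading of the abstract, not the authors' model: the rectified responses, the least-squares decoder, and the weighting `lam` are assumptions introduced here for concreteness.

```python
import numpy as np

def efficient_coding_cost(stimuli, weights, threshold, lam=1.0):
    """Toy cost in the spirit of the abstract (hypothetical formulation,
    not the authors' exact model).

    stimuli:   (n_samples, n_inputs) sensory input patterns
    weights:   (n_inputs, n_neurons) synaptic weights
    threshold: stimulus-independent firing threshold
    lam:       weight of the sparsity term (assumed here)
    """
    currents = stimuli @ weights                    # net synaptic currents
    supra = np.maximum(currents - threshold, 0.0)   # suprathreshold part
    sub = np.minimum(currents - threshold, 0.0)     # subthreshold part

    sparsity_cost = lam * supra.sum()               # linear penalty -> sparse firing
    subthreshold_cost = 0.5 * (sub ** 2).sum()      # quadratic penalty below threshold

    # Linear least-squares decoder: reconstruct the stimuli from the
    # rectified (suprathreshold) responses.
    responses = supra
    decoder, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)
    reconstruction_cost = 0.5 * ((stimuli - responses @ decoder) ** 2).sum()

    return reconstruction_cost + sparsity_cost + subthreshold_cost
```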

    Sparse coding on the spot: Spontaneous retinal waves suffice for orientation selectivity

    Ohshiro, Hussain, and Weliky (2011) recently showed that ferrets reared with exposure to flickering spot stimuli, in the absence of oriented visual experience, develop oriented receptive fields. They interpreted this as a refutation of efficient coding models, which require oriented input in order to develop oriented receptive fields. Here we show that these data are compatible with the efficient coding hypothesis if the influence of spontaneous retinal waves is considered. We demonstrate that independent component analysis learns predominantly oriented receptive fields when trained on a mixture of spot stimuli and spontaneous retinal waves. Further, we show that the efficient coding hypothesis provides a compelling explanation for the contrast between the lack of receptive field changes seen in animals reared with spot stimuli and the significant cortical reorganisation observed in stripe-reared animals.
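As a rough illustration of the argument, the sketch below runs FastICA on a synthetic mixture of isotropic spot stimuli and crude wave-like fronts. The wave generator is a stand-in invented here, not the recorded retinal waves used in the study, so it only illustrates why mixing in wavefront structure can yield oriented filters.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
patch, n_patches = 16, 5000

def spot_stimulus():
    """Flickering spot: an isotropic Gaussian bump at a random position."""
    y, x = np.mgrid[0:patch, 0:patch]
    cy, cx = rng.uniform(2, patch - 2, size=2)
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * 2.0 ** 2))

def retinal_wave():
    """Crude stand-in for a spontaneous retinal wave: an edge-like front
    with a random orientation and offset (a simplification of real waves)."""
    y, x = np.mgrid[0:patch, 0:patch]
    theta = rng.uniform(0, np.pi)
    offset = rng.uniform(0, patch)
    front = np.cos(theta) * x + np.sin(theta) * y - offset
    return 1.0 / (1.0 + np.exp(-front))          # soft wavefront

# Mixture of spot stimuli and wave-like activity
X = np.stack([spot_stimulus() if rng.random() < 0.5 else retinal_wave()
              for _ in range(n_patches)]).reshape(n_patches, -1)
X -= X.mean(axis=0)

ica = FastICA(n_components=25, whiten="unit-variance",
              max_iter=1000, random_state=0)
ica.fit(X)
filters = ica.components_.reshape(-1, patch, patch)   # many come out oriented
```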

    Closed-loop estimation of retinal network sensitivity reveals signature of efficient coding

    According to the theory of efficient coding, sensory systems are adapted to represent natural scenes with high fidelity and at minimal metabolic cost. Testing this hypothesis for sensory structures performing non-linear computations on high-dimensional stimuli is still an open challenge. Here we develop a method to characterize the sensitivity of the retinal network to perturbations of a stimulus. Using closed-loop experiments, we selectively explore the space of possible perturbations around a given stimulus. We then show that the response of the retinal population to these small perturbations can be described by a local linear model. Using this model, we compute the sensitivity of the neural response to arbitrary temporal perturbations of the stimulus and find a peak in the sensitivity as a function of the frequency of the perturbations. Based on a minimal theory of sensory processing, we argue that this peak is set to maximize information transmission. Our approach is relevant to testing the efficient coding hypothesis locally in any context where no reliable encoding model is known.
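A minimal sketch of the local linear description is given below, assuming perturbation-response pairs have already been collected. The least-squares fit and the sinusoidal probe are illustrative choices made here, not the closed-loop estimator used in the paper.

```python
import numpy as np

def fit_local_linear_model(perturbations, responses):
    """Fit R(S0 + dS) ~ R0 + F @ dS around a reference stimulus S0.

    perturbations: (n_trials, n_time)     small temporal perturbations dS
    responses:     (n_trials, n_neurons)  deviations of the population
                                          response from its reference R0
    Returns F with shape (n_neurons, n_time).
    """
    F, *_ = np.linalg.lstsq(perturbations, responses, rcond=None)
    return F.T

def sensitivity_vs_frequency(F, dt, freqs, noise_var=1.0):
    """Sensitivity to a unit-amplitude sinusoidal perturbation at each
    frequency: norm of the predicted response change, scaled by response
    noise (an assumed, simplified sensitivity measure)."""
    t = np.arange(F.shape[1]) * dt
    sens = []
    for f in freqs:
        d_stim = np.sin(2 * np.pi * f * t)
        d_resp = F @ d_stim
        sens.append(np.linalg.norm(d_resp) / np.sqrt(noise_var))
    return np.array(sens)   # expect a peak at some intermediate frequency
```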

    Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    To date, a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing the coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment.
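The two-stage idea (ICA on binaural spectrogram features, followed by a readout of source position) can be sketched as below. The binaural data are synthetic and carry only an interaural level difference, and a logistic-regression readout stands in for the paper's hierarchical ICA extension, so this illustrates the pipeline rather than reproducing it.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_freq, n_samples = 32, 4000

# Toy binaural "spectrogram slices": a shared random spectrum whose
# interaural level difference (ILD) depends on the source azimuth.
azimuth = rng.integers(0, 5, size=n_samples)          # 5 discrete positions
gain_db = np.linspace(-6, 6, 5)[azimuth]              # ILD in dB
spec = rng.lognormal(size=(n_samples, n_freq))
left = spec * 10 ** (+gain_db[:, None] / 20)
right = spec * 10 ** (-gain_db[:, None] / 20)
X = np.log(np.hstack([left, right]))                  # binaural log-spectrogram
X -= X.mean(axis=0)

# Stage 1: linear efficient coding transform (ICA) on binaural features
ica = FastICA(n_components=40, whiten="unit-variance",
              max_iter=1000, random_state=0)
S = ica.fit_transform(X)

# Stage 2: decode source position from the learned component activations
clf = LogisticRegression(max_iter=2000).fit(S[:2000], azimuth[:2000])
print("decoding accuracy:", clf.score(S[2000:], azimuth[2000:]))
```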

    Information Flow in Color Appearance Neural Networks

    Color Appearance Models are biological networks that consist of a cascade of linear+nonlinear layers that modify the linear measurements at the retinal photo-receptors, leading to an internal (nonlinear) representation of color that correlates with psychophysical experience. The basic layers of these networks include: (1) chromatic adaptation (normalization of the mean and covariance of the color manifold), (2) change to opponent color channels (PCA-like rotation in the color space), and (3) saturating nonlinearities to get perceptually Euclidean color representations (similar to dimension-wise equalization). The Efficient Coding Hypothesis argues that these transforms should emerge from information-theoretic goals. If this hypothesis holds in color vision, the question is: what is the coding gain due to the different layers of the color appearance networks? In this work, a representative family of Color Appearance Models is analyzed in terms of how the redundancy among the chromatic components is modified along the network and how much information is transferred from the input data to the noisy response. The proposed analysis uses data and methods that were not available before: (1) new colorimetrically calibrated scenes under different CIE illuminations for proper evaluation of chromatic adaptation, and (2) new statistical tools to estimate (multivariate) information-theoretic quantities between multidimensional sets based on Gaussianization. Results confirm that the Efficient Coding Hypothesis holds for current color vision models, and identify the psychophysical mechanisms critically responsible for gains in information transference: opponent channels and their nonlinear nature are more important than chromatic adaptation at the retina.
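For intuition about the information-theoretic measurements, the sketch below estimates mutual information between two multidimensional sets with a Gaussian-copula approximation: each marginal is rank-Gaussianized and the information is read off the joint correlation matrix. This is a much cruder stand-in for the Gaussianization-based estimators referenced in the abstract, included only to make the measured quantity concrete (e.g., information shared between retinal inputs and a layer's noisy responses).

```python
import numpy as np
from scipy.stats import norm, rankdata

def gauss_copula(X):
    """Map each column of X to a standard normal via its empirical ranks."""
    n = X.shape[0]
    U = np.apply_along_axis(rankdata, 0, X) / (n + 1)
    return norm.ppf(U)

def copula_mutual_info(X, Y):
    """Mutual information (in bits) between multidimensional X and Y under
    a Gaussian-copula approximation: I = 0.5 * log(det Cxx * det Cyy / det C).
    A simplified stand-in for the Gaussianization tools used in the paper."""
    Z = np.hstack([gauss_copula(X), gauss_copula(Y)])
    C = np.corrcoef(Z, rowvar=False)
    dx = X.shape[1]
    _, logdet_x = np.linalg.slogdet(C[:dx, :dx])
    _, logdet_y = np.linalg.slogdet(C[dx:, dx:])
    _, logdet_xy = np.linalg.slogdet(C)
    return 0.5 * (logdet_x + logdet_y - logdet_xy) / np.log(2)
```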

    Learning efficient image representations: Connections between statistics and neuroscience

    This thesis summarizes different works developed in the framework of analyzing the relation between image processing, statistics and neuroscience. These relations are analyzed from the point of view of the efficient coding hypothesis (Barlow [1961] and Attneave [1954]), which suggests that the human visual system has adapted over time to process visual information efficiently, i.e. by exploiting the statistical regularities of the visual world. Under this classical idea, work is developed in several directions. One direction analyzes the statistical properties of a revisited, extended and fitted classical model of the human visual system, in which no statistical information is used. Results show that this model obtains a representation with good statistical properties, which is new evidence in favor of the efficient coding hypothesis. From the statistical point of view, different methods are proposed and optimized using natural images. The models obtained with these statistical methods behave similarly to the human visual system, in both the spatial and color dimensions, which is further evidence for the efficient coding hypothesis. Applications in image processing are an important part of the thesis: statistical and neuroscience-based methods are employed to develop a wide set of image processing algorithms, whose results in denoising, classification, synthesis and quality assessment are comparable to some of the most successful current methods.