
    Stimulus-dependent maximum entropy models of neural population codes

    Neural populations encode information about their stimulus in a collective fashion, through joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To infer a model for this distribution from large-scale neural recordings, we introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. The model captures the single-cell response properties as well as the correlations in neural spiking due to the shared stimulus and to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single-cell responses and, in particular, outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population. Comment: 11 pages, 7 figures.
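
    As a rough sketch of the kind of distribution such a model describes (the exact parameterization of the stimulus-dependent terms below is an assumption, not taken from the paper), an SDME model over binary spike/silence words sigma given a stimulus s can be written in LaTeX as

        P(\sigma \mid s) \;=\; \frac{1}{Z(s)} \exp\!\Big( \sum_i h_i(s)\,\sigma_i \;+\; \sum_{i<j} J_{ij}\,\sigma_i \sigma_j \Big)

    where h_i(s) is a stimulus-dependent single-cell field (for example, the output of a linear-nonlinear filter applied to the stimulus), J_{ij} are stimulus-independent pairwise couplings, and Z(s) is the stimulus-dependent normalization. Setting all J_{ij} = 0 recovers a population of independent linear-nonlinear neurons, the kind of uncoupled model the abstract compares against.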

    Functional Clustering Drives Encoding Improvement in a Developing Brain Network during Awake Visual Learning

    Sensory experience drives dramatic structural and functional plasticity in developing neurons. However, for single-neuron plasticity to optimally improve whole-network encoding of sensory information, changes must be coordinated between neurons to ensure that a full range of stimuli is efficiently represented. Using two-photon calcium imaging to monitor evoked activity in over 100 neurons simultaneously, we investigate network-level changes in the developing Xenopus laevis tectum during visual training with motion stimuli. Training causes stimulus-specific changes in neuronal responses and interactions, resulting in improved population encoding. This plasticity is spatially structured, increasing tuning curve similarity and interactions among nearby neurons and decreasing interactions among distant neurons. Training does not improve encoding by single clusters of similarly responding neurons, but improves encoding across clusters, indicating coordinated plasticity across the network. NMDA receptor blockade prevents coordinated plasticity, reduces clustering, and abolishes whole-network encoding improvement. We conclude that NMDA receptors support experience-dependent network self-organization, allowing efficient population coding of a diverse range of stimuli. Funding: Canadian Institutes of Health Research.
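
    As a minimal illustration of the population-level quantities involved (not the authors' analysis code; the function name, array shapes, and use of Pearson correlation are assumptions), tuning-curve similarity and pairwise interactions could be estimated from trial-by-stimulus-by-neuron calcium responses roughly as follows:

        # Hypothetical sketch: tuning-curve similarity ("signal correlation"),
        # pairwise interactions ("noise correlation"), and inter-neuron distances
        # from imaged population responses.
        import numpy as np

        def population_structure(responses, positions):
            """responses: (n_trials, n_stimuli, n_neurons) evoked activity
               positions: (n_neurons, 2) soma coordinates in the imaging plane"""
            tuning = responses.mean(axis=0)            # (n_stimuli, n_neurons) tuning curves
            signal_corr = np.corrcoef(tuning.T)        # tuning-curve similarity between cells
            resid = responses - tuning[None]           # trial-to-trial fluctuations
            noise_corr = np.corrcoef(resid.reshape(-1, resid.shape[-1]).T)
            dist = np.linalg.norm(positions[:, None] - positions[None], axis=-1)
            return signal_corr, noise_corr, dist

    Relating signal_corr and noise_corr to dist before and after training (and under NMDA receptor blockade) is the kind of comparison the abstract summarizes.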

    Optimal Neural Codes for Natural Stimuli

    The efficient coding hypothesis assumes that biological sensory systems use neural codes that are optimized to best represent the stimuli that occur in their environment. When formulating such an optimization problem for neural codes, two key components must be considered. The first is the type of constraints the neural codes must satisfy; the second is the objective function itself: what is the goal of the neural codes? We seek to provide a systematic framework to address these questions. Previous work often assumes one specific set of constraints and analytically or numerically solves the optimization problem. Here we put everything in a unified framework and show that these results can be understood from a much more general perspective. In particular, we provide analytical solutions for a variety of neural noise models and two types of constraint: a range constraint, which specifies the maximum/minimum neural activity, and a metabolic constraint, which upper-bounds the mean neural activity. In terms of objective functions, most common models rely on information-theoretic measures, whereas alternative formulations propose incorporating downstream decoding performance. We systematically evaluate different optimality criteria based on the L_p reconstruction error of the maximum likelihood decoder. This parametric family of optimality criteria includes special cases such as the information maximization criterion and the minimization of the mean squared decoding error. We analytically derive the optimal tuning curve of a single neuron in terms of the reconstruction error norm p for encoding natural stimuli with an arbitrary input distribution. Under our framework, we can ask questions such as: what objective function is the neural code actually using? Under what constraints do the predicted results provide a better fit to the actual data? Using different combinations of objective function and constraints, we tested our analytical predictions against previously measured characteristics of some early visual systems found in biology. We find that solutions under the metabolic constraint with low values of p provide a better fit for physiological data on early visual perception systems.
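
    To make the family of optimality criteria concrete (a sketch consistent with the abstract; the notation is ours, not the paper's), the optimization over a tuning curve h(s) for stimuli drawn from a natural distribution pi(s) can be stated in LaTeX as

        \min_{h(\cdot)} \;\; \mathbb{E}\big[\, |\hat{s}_{\mathrm{ML}} - s|^{p} \,\big]
        \qquad \text{s.t.} \quad 0 \le h(s) \le h_{\max} \;\;\text{(range constraint)}
        \quad \text{or} \quad \mathbb{E}_{s \sim \pi}[\,h(s)\,] \le c \;\;\text{(metabolic constraint)},

    where the outer expectation runs over the stimulus distribution \pi(s) and the neural noise model, and \hat{s}_{\mathrm{ML}} is the maximum likelihood decoder's estimate from the noisy response. Within this family, p = 2 corresponds to minimizing the mean squared decoding error, while the limit of small p recovers the information maximization criterion mentioned in the abstract.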

    From Holistic to Discrete Speech Sounds: The Blind Snow-Flake Maker Hypothesis

    Sound is a medium used by humans to carry information. The existence of such a medium is a prerequisite for language. It is organized into a code, called speech, which provides a repertoire of forms that is shared within each language community. This code is necessary to support the linguistic interactions that allow humans to communicate. How, then, may a speech code be formed prior to the existence of linguistic interactions? Moreover, the human speech code is characterized by several properties: speech is digital and compositional (vocalizations are made of units re-used systematically in other syllables); phoneme inventories show precise regularities as well as great diversity across human languages; all the speakers of a language community categorize sounds in the same manner, but each language has its own system of categorization, possibly very different from that of every other. How can a speech code with these properties form? These are the questions we approach in this paper. We study them using the method of the artificial: we build a society of artificial agents and study what mechanisms may provide answers. This will not prove directly what mechanisms were used by humans, but rather gives ideas about what kinds of mechanism may have been used. This allows us to shape the search space of possible answers, in particular by showing what is sufficient and what is not necessary. The mechanism we present is based on a low-level model of sensory-motor interactions. We show that the integration of certain very simple and non-language-specific neural devices allows a population of agents to build a speech code that has the properties mentioned above. Its originality is that it presupposes neither a functional pressure for communication nor the ability to have coordinated social interactions (the agents do not play language or imitation games). It relies on the self-organizing properties of a generic coupling between perception and production within agents, and on the interactions between agents.
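
    A heavily simplified toy sketch of this kind of mechanism (our own illustration, not the paper's model; the 1-D vocalization space and all parameters are assumptions) shows how coupled perception-production units, updated only by exposure to produced sounds, can crystallize from a continuum into a small shared set of discrete targets:

        # Hypothetical toy: agents hold preferred points in a continuous 1-D
        # vocalization space; hearing a sound attracts nearby preferences toward it.
        # No imitation game, no communicative pressure; clustering emerges anyway.
        import numpy as np

        rng = np.random.default_rng(0)
        n_agents, n_units, sigma, lr = 10, 50, 0.05, 0.1
        prefs = rng.random((n_agents, n_units))    # initially holistic, spread-out targets

        for step in range(20000):
            speaker, hearer = rng.choice(n_agents, size=2, replace=False)
            sound = prefs[speaker, rng.integers(n_units)] + rng.normal(0, 0.01)
            for agent in (speaker, hearer):        # coupling acts in speaker and hearer alike
                w = np.exp(-(prefs[agent] - sound) ** 2 / (2 * sigma ** 2))
                prefs[agent] += lr * w * (sound - prefs[agent])

        print(np.round(np.sort(prefs[0]), 2))      # preferences collapse onto a few shared values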

    Stochasticity from function -- why the Bayesian brain may need no noise

    An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing. Since the precise statistical properties of neural activity are important in this context, many models assume an ad hoc source of well-behaved, explicit noise, either on the input or on the output side of single-neuron dynamics, most often assuming an independent Poisson process in either case. However, these assumptions are somewhat problematic: neighboring neurons tend to share receptive fields, rendering both their input and their output correlated; at the same time, neurons are known to behave largely deterministically, as a function of their membrane potential and conductance. We suggest that spiking neural networks may, in fact, have no need for noise to perform sampling-based Bayesian inference. We study analytically the effect of auto- and cross-correlations in functionally Bayesian spiking networks and demonstrate how their effect translates to synaptic interaction strengths, rendering them controllable through synaptic plasticity. This allows even small ensembles of interconnected deterministic spiking networks to simultaneously and co-dependently shape their output activity through learning, enabling them to perform complex Bayesian computation without any need for noise, which we demonstrate in silico, both in classical simulation and in neuromorphic emulation. These results close a gap between abstract models and the biology of functionally Bayesian spiking networks, effectively reducing the architectural constraints imposed on physical neural substrates required to perform probabilistic computing, be they biological or artificial.
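
    For orientation, here is a minimal sketch of the target computation (our illustration, not the authors' implementation): binary units sampling from a Boltzmann distribution p(z) proportional to exp(b.z + z'Wz/2). The sketch uses an explicit random number generator; the point of the paper is that this source of variability could instead be supplied by the deterministic activity of other interconnected networks, with its auto- and cross-correlations absorbed into effective biases b and weights W.

        # Hypothetical sketch: Gibbs sampling from a Boltzmann distribution with
        # logistic units, the kind of distribution a sampling-based spiking network
        # is taken to represent.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 5
        W = rng.normal(0, 0.5, (n, n))
        W = (W + W.T) / 2
        np.fill_diagonal(W, 0)                       # symmetric couplings, no self-connection
        b = rng.normal(0, 0.5, n)
        z = rng.integers(0, 2, n).astype(float)

        samples = []
        for _ in range(5000):
            i = rng.integers(n)                      # pick a unit to update
            u = b[i] + W[i] @ z                      # membrane-potential-like drive
            z[i] = float(rng.random() < 1 / (1 + np.exp(-u)))
            samples.append(z.copy())
        print(np.mean(samples, axis=0))              # marginal activation probabilities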