Neural Decision Boundaries for Maximal Information Transmission
We consider here how to separate multidimensional signals into two
categories, such that the binary decision transmits the maximum possible
information about those signals. Our motivation comes from the
nervous system, where neurons process multidimensional signals into a binary
sequence of responses (spikes). In a small noise limit, we derive a general
equation for the decision boundary that locally relates its curvature to the
probability distribution of inputs. We show that for Gaussian inputs the
optimal boundaries are planar, but for non-Gaussian inputs the curvature is
nonzero. As an example, we consider exponentially distributed inputs, which are
known to approximate a variety of signals from natural environments.
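The small-noise intuition can be checked numerically: when the binary decision is a deterministic function of the signal, the transmitted information reduces to the entropy of the decision, which is maximized (at 1 bit) by any boundary that splits the input probability mass in half. A minimal Python sketch with a one-dimensional Gaussian input and a threshold boundary (an illustration, not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_entropy_bits(p):
    """Entropy in bits of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Deterministic (small-noise) binary decision: I(Y; X) = H(Y),
# so information is maximized by a boundary splitting the mass 50/50.
x = rng.normal(size=100_000)

p_median = np.mean(x > np.median(x))  # boundary at the median
p_skewed = np.mean(x > 1.5)           # boundary far from the median

info_median = binary_entropy_bits(p_median)
info_skewed = binary_entropy_bits(p_skewed)
print(info_median, info_skewed)
```

For the Gaussian case a median threshold is a planar boundary through the mean, consistent with the planar optima described above.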
Efficient Temporal Processing of Naturalistic Sounds
In this study, we investigate the ability of the mammalian auditory pathway to adapt its strategy for temporal processing under natural stimulus conditions. We derive temporal receptive fields from the responses of neurons in the inferior colliculus to vocalization stimuli with and without additional ambient noise. We find that the onset of ambient noise evokes a change in receptive field dynamics that corresponds to a change from bandpass to lowpass temporal filtering. We show that these changes occur within a few hundred milliseconds of the onset of the noise and are evident across a range of overall stimulus intensities. Using a simple model, we illustrate how these changes in temporal processing exploit differences in the statistical properties of vocalizations and ambient noises to increase the information in the neural response in a manner consistent with the principles of efficient coding.
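The bandpass-to-lowpass distinction can be made concrete with toy temporal kernels (the kernel shapes below are assumptions for illustration, not the receptive fields derived in the study): a monophasic kernel passes slow envelope fluctuations, while a biphasic kernel with cancelling lobes rejects them and prefers an intermediate modulation rate.

```python
import numpy as np

t = np.arange(0, 0.1, 0.001)  # 100 ms of temporal lags, 1 ms steps

# Monophasic (lowpass) kernel: integrates the recent envelope.
lowpass = np.exp(-t / 0.02)
lowpass /= lowpass.sum()  # unit DC gain

# Biphasic (bandpass) kernel: difference of exponentials, scaled so the
# positive and negative lobes cancel exactly, giving zero DC gain.
fast = np.exp(-t / 0.005)
slow = np.exp(-t / 0.02)
bandpass = fast - (fast.sum() / slow.sum()) * slow

gain_low = np.abs(np.fft.rfft(lowpass, 1024))
gain_band = np.abs(np.fft.rfft(bandpass, 1024))

dc_low = gain_low[0]                   # 1: slow modulations pass
dc_band = gain_band[0]                 # ~0: slow modulations are rejected
peak_band = int(np.argmax(gain_band))  # preferred modulation rate is > 0
print(dc_low, dc_band, peak_band)
```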
Emergence of Tuning to Natural Stimulus Statistics along the Central Auditory Pathway
We have previously shown that neurons in primary auditory cortex (A1) of anaesthetized (ketamine/medetomidine) ferrets respond more strongly and reliably to dynamic stimuli whose statistics follow "natural" 1/f dynamics than to stimuli exhibiting pitch and amplitude modulations that are faster (1/f^0.5) or slower (1/f^2) than 1/f. To investigate where along the central auditory pathway this 1/f-modulation tuning arises, we have now characterized responses of neurons in the central nucleus of the inferior colliculus (ICC) and the ventral division of the medial geniculate nucleus of the thalamus (MGV) to 1/f^gamma distributed stimuli with gamma varying between 0.5 and 2.8. We found that, while the great majority of neurons recorded from the ICC showed a strong preference for the most rapidly varying (1/f^0.5 distributed) stimuli, responses from MGV neurons did not exhibit marked or systematic preferences for any particular gamma exponent. Only in A1 did a majority of neurons respond with higher firing rates to stimuli in which gamma takes values near 1. These results indicate that 1/f tuning emerges at forebrain levels of the ascending auditory pathway.
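A common way to synthesize a signal with 1/f^gamma statistics is spectral shaping with random phases. The sketch below uses that assumed construction (not necessarily the stimulus-generation method used in the study) and then recovers the exponent from the power spectrum:

```python
import numpy as np

def one_over_f_signal(n, gamma, rng):
    """Random signal whose power spectrum falls off as 1/f**gamma."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amplitude = np.zeros_like(freqs)
    amplitude[1:] = freqs[1:] ** (-gamma / 2.0)  # power ~ f**(-gamma)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    phases[-1] = 0.0                             # keep the Nyquist bin real
    spectrum = amplitude * np.exp(1j * phases)
    return np.fft.irfft(spectrum, n)

rng = np.random.default_rng(1)
n, gamma = 4096, 1.0
x = one_over_f_signal(n, gamma, rng)

# Recover the spectral exponent with a log-log fit to the power spectrum.
freqs = np.fft.rfftfreq(n, d=1.0)[1:]
power = np.abs(np.fft.rfft(x))[1:] ** 2
slope = np.polyfit(np.log(freqs), np.log(power), 1)[0]
print(slope)  # close to -gamma
```

Varying `gamma` between 0.5 and 2.8 reproduces the stimulus family described above, from rapidly varying (small gamma) to slowly varying (large gamma).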
Effects of Noise Bandwidth and Amplitude Modulation on Masking in Frog Auditory Midbrain Neurons
Natural auditory scenes such as frog choruses consist of multiple sound sources (i.e., individual vocalizing males) producing sounds that overlap extensively in time and spectrum, often in the presence of other biotic and abiotic background noise. Detection of a signal in such environments is challenging, but it is facilitated when the noise shares common amplitude modulations across a wide frequency range, a phenomenon called comodulation masking release (CMR). Here, we examined how properties of the background noise, such as its bandwidth and amplitude modulation, influence the detection threshold of a target sound (pulsed amplitude-modulated tones) by single neurons in the frog auditory midbrain. We found that for both modulated and unmodulated masking noise, masking generally grew stronger with increasing bandwidth but weakened at the widest bandwidths. Masking was weaker for modulated than for unmodulated noise at all bandwidths. However, responses were heterogeneous, and only for a subpopulation of neurons was detection of the probe facilitated when the bandwidth of the modulated masker was increased beyond a certain point; such neurons might contribute to CMR. We observed evidence suggesting that TS neurons exploit the dips in the noise amplitude, responding strongly to target signals occurring during such dips. However, the interactions between the probe and masker responses were nonlinear, and other mechanisms, e.g., selective suppression of the response to the noise, may also be involved in the masking release.
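The comodulation cue itself is easy to make concrete. In the sketch below (an assumed construction, not the study's stimuli), two noise bands sharing one sinusoidal envelope have strongly correlated envelopes, so their amplitude dips coincide; a phase-shifted control band does not:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, dur = 10_000, 2.0                  # sample rate (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs

# 10 Hz sinusoidal amplitude modulation with deep dips.
env = 1.0 + 0.99 * np.sin(2 * np.pi * 10 * t)

# Comodulated masker: two noise bands carry the SAME envelope,
# so their dips coincide (the cue exploited in dip listening / CMR).
band_a = env * rng.normal(size=t.size)
band_b = env * rng.normal(size=t.size)

# Non-comodulated control: same modulation depth, phase-shifted envelope.
env_shifted = 1.0 + 0.99 * np.cos(2 * np.pi * 10 * t)
band_c = env_shifted * rng.normal(size=t.size)

def extracted_envelope(x, width=100):
    """Crude envelope: rectify and smooth over `width` samples (10 ms)."""
    return np.convolve(np.abs(x), np.ones(width) / width, mode="same")

r_comod = np.corrcoef(extracted_envelope(band_a), extracted_envelope(band_b))[0, 1]
r_indep = np.corrcoef(extracted_envelope(band_a), extracted_envelope(band_c))[0, 1]
print(r_comod, r_indep)  # comodulated bands: strongly correlated envelopes
```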
Estimating Receptive Fields from Responses to Natural Stimuli with Asymmetric Intensity Distributions
The reasons for using natural stimuli to study sensory function are quickly mounting, as recent studies have revealed important differences in neural responses to natural and artificial stimuli. However, natural stimuli typically contain strong correlations and are spherically asymmetric (i.e. stimulus intensities are not symmetrically distributed around the mean), and these statistical complexities can bias receptive field (RF) estimates when standard techniques such as spike-triggered averaging or reverse correlation are used. While a number of approaches have been developed to explicitly correct the bias due to stimulus correlations, there is no complementary technique to correct the bias due to stimulus asymmetries. Here, we develop a method for RF estimation that corrects reverse correlation RF estimates for the spherical asymmetries present in natural stimuli. Using simulated neural responses, we demonstrate how stimulus asymmetries can bias reverse-correlation RF estimates (even for uncorrelated stimuli) and illustrate how this bias can be removed by explicit correction. We demonstrate the utility of the asymmetry correction method under experimental conditions by estimating RFs from the responses of retinal ganglion cells to natural stimuli and using these RFs to predict responses to novel stimuli.
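As a baseline for the bias argument, plain reverse correlation can be sketched in the setting where it is unbiased: Gaussian white-noise stimuli driving a simulated linear-nonlinear neuron (all filter shapes and parameters below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth linear receptive field (a biphasic temporal filter).
lags = np.arange(20)
true_rf = np.exp(-lags / 4.0) - 0.5 * np.exp(-lags / 8.0)
true_rf /= np.linalg.norm(true_rf)

# Gaussian white-noise stimulus, LN response with exponential nonlinearity.
n = 50_000
stim = rng.normal(size=n)
drive = np.convolve(stim, true_rf, mode="full")[:n]
spikes = rng.poisson(np.exp(drive - 1.0))

# Reverse correlation: spike-weighted average of preceding stimulus snippets.
sta = np.zeros_like(true_rf)
for t in np.nonzero(spikes)[0]:
    if t >= len(lags) - 1:
        snippet = stim[t - len(lags) + 1 : t + 1][::-1]  # newest lag first
        sta += spikes[t] * snippet
sta /= spikes.sum()
sta /= np.linalg.norm(sta)

similarity = float(np.dot(sta, true_rf))
print(similarity)  # near 1: the STA recovers the filter for Gaussian stimuli
```

With an asymmetric stimulus distribution, e.g. exponentially distributed intensities, the same estimator is no longer guaranteed to recover the filter, which is the bias the proposed correction addresses.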
A Generalized Linear Model for Estimating Spectrotemporal Receptive Fields from Responses to Natural Sounds
In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: 1) a stimulus filter (STRF); and 2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.
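The generative side of such a model can be sketched as follows (a schematic Bernoulli-per-bin simulation with assumed filter shapes, not the model fitted in the study): each bin's spike probability depends on the stimulus passed through a filter plus feedback from recent spikes through a suppressive post-spike filter.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 10_000
stim = rng.normal(size=n)

k = np.exp(-np.arange(15) / 5.0)         # stimulus filter (STRF analogue)
k /= np.linalg.norm(k)
h = -4.0 * np.exp(-np.arange(10) / 2.0)  # suppressive post-spike filter
bias = -2.0

drive = np.convolve(stim, k, mode="full")[:n] + bias

spikes = np.zeros(n, dtype=int)
for t in range(n):
    # Total input = stimulus drive + feedback from recent spikes.
    eta = drive[t]
    for lag in range(1, len(h) + 1):
        if t - lag >= 0 and spikes[t - lag]:
            eta += h[lag - 1]
    p = 1.0 / (1.0 + np.exp(-eta))       # Bernoulli spike probability
    spikes[t] = rng.random() < p

# The post-spike filter enforces relative refractoriness:
rate = spikes.mean()
pairs = np.sum(spikes[1:] & spikes[:-1])  # immediately consecutive spikes
print(rate, pairs)
```

Because the output is a spike train rather than a rate, the same machinery can generate predicted spike trains for novel stimuli, as described above.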
Encoding of Temporal Information by Timing, Rate, and Place in Cat Auditory Cortex
A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: 1) the event-locked spike-timing precision, 2) the mean firing rate, and 3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information indicative of a place code. Diversity in local processing emphasis and distribution of different repetition rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
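The quantity compared across these codes is mutual information between the repetition-rate stimulus and a discretized response. A minimal plug-in estimator on toy data (not the recorded responses) shows the two extremes:

```python
import numpy as np

def mutual_information_bits(stimulus, response):
    """Plug-in mutual information estimate (bits) for discrete sequences."""
    stimulus = np.asarray(stimulus)
    response = np.asarray(response)
    mi = 0.0
    for s in np.unique(stimulus):
        for r in np.unique(response):
            p_sr = np.mean((stimulus == s) & (response == r))
            if p_sr > 0:
                p_s = np.mean(stimulus == s)
                p_r = np.mean(response == r)
                mi += p_sr * np.log2(p_sr / (p_s * p_r))
    return mi

# Toy comparison: a response that perfectly distinguishes two repetition
# rates carries 1 bit; a response blind to the stimulus carries 0 bits.
stim = np.array([0, 1] * 50)
perfect = stim.copy()          # e.g. a fully informative ISI code
useless = np.zeros_like(stim)  # a response independent of the stimulus

mi_perfect = mutual_information_bits(stim, perfect)
mi_useless = mutual_information_bits(stim, useless)
print(mi_perfect, mi_useless)  # 1.0 and 0.0
```

The same estimator applies whether the response symbol is a spike-timing bin, a firing-rate bin, or a discretized ISI, which is what allows the three codes to be compared on a common scale.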
Neural processing of natural sounds
Natural sounds include animal vocalizations, environmental sounds such as wind, water and fire noises, and non-vocal sounds made by animals and humans for communication. These natural sounds have characteristic statistical properties that make them perceptually salient and that drive auditory neurons in optimal regimes for information transmission.
Recent advances in statistics and computer science have allowed neurophysiologists to extract the stimulus-response function of complex auditory neurons from responses to natural sounds. These studies have shown a hierarchical processing that leads to the neural detection of progressively more complex natural sound features and have demonstrated the importance of the acoustical and behavioral contexts for the neural responses.
High-level auditory neurons have been shown to be exquisitely selective for conspecific calls. This fine selectivity could play an important role in species recognition, in vocal learning in songbirds and, in the case of bats, in the processing of the sounds used in echolocation. Research that investigates how communication sounds are categorized into behaviorally meaningful groups (e.g. call types in animals, words in human speech) remains in its infancy.
Animals and humans also excel at separating communication sounds from each other and from background noise. Neurons that detect communication calls in noise have been found, but the neural computations involved in sound source separation and natural auditory scene analysis remain poorly understood.
Thus, future auditory research will have to focus not only on how natural sounds are processed by the auditory system but also on the computations that allow this processing to occur in natural listening situations. The complexity of the computations needed for natural hearing might require the high-dimensional representation provided by ensembles of neurons, and the use of natural sounds might be the best approach for understanding the ensemble neural code.