    Frequency-Invariant Representation of Interaural Time Differences in Mammals

    Interaural time differences (ITDs) are the major cue for localizing low-frequency sounds. The activity of neuronal populations in the brainstem encodes ITDs with an exquisite temporal acuity on the order of tens of microseconds. The response of single neurons, however, also changes with other stimulus properties, such as the spectral composition of the sound. Because the influence of stimulus frequency differs widely across neurons, it is unclear how populations of neurons encode ITDs independently of stimulus frequency. Here we fitted a statistical model to single-cell rate responses of the dorsal nucleus of the lateral lemniscus. The model was used to evaluate the impact of single-cell response characteristics on the frequency-invariant mutual information between rate response and ITD. We found a rough correspondence between the measured cell characteristics and those predicted by computing mutual information. Furthermore, we studied two readout mechanisms, a linear classifier and a two-channel rate-difference decoder. The latter turned out to be better suited to decoding the population patterns obtained from the fitted model.
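
    As a rough illustration of the two-channel readout discussed above, the sketch below decodes ITD from the summed rate difference of two toy neuronal populations. The tuning-curve shapes, neuron counts, and noise model are assumptions for illustration, not the recorded responses or the fitted model from the study.

```python
# Minimal sketch of a two-channel rate-difference decoder (toy model).
import numpy as np

rng = np.random.default_rng(0)

itds = np.linspace(-300e-6, 300e-6, 121)           # candidate ITDs (s)
best_itds = rng.uniform(100e-6, 250e-6, size=40)   # contralateral best ITDs (s), assumed

def rates(itd, sign):
    """Gaussian ITD tuning; sign = +1 for the left, -1 for the right population."""
    return 50.0 * np.exp(-((sign * itd - best_itds) ** 2) / (2 * (150e-6) ** 2))

# Calibration curve: mean summed-rate difference as a function of ITD.
diff_curve = np.array([rates(t, +1).sum() - rates(t, -1).sum() for t in itds])

def decode(true_itd):
    """One noisy trial: Poisson counts in a 50 ms window, read out via the calibration curve."""
    left = rng.poisson(rates(true_itd, +1) * 0.05).sum() / 0.05
    right = rng.poisson(rates(true_itd, -1) * 0.05).sum() / 0.05
    return itds[np.argmin(np.abs(diff_curve - (left - right)))]

estimates = [decode(100e-6) for _ in range(200)]
print(f"true ITD 100 us -> mean estimate {np.mean(estimates) * 1e6:.1f} us")
```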

    Decoding neural responses to temporal cues for sound localization

    The activity of sensory neural populations carries information about the environment, which may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations, one in each hemisphere, whereas earlier theories hypothesized that location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that the pooled activity of each hemisphere carries insufficient information to estimate sound direction reliably enough to be consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies. DOI: http://dx.doi.org/10.7554/eLife.01312.001
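
    The sketch below contrasts the two decoding strategies in a toy setting: a pooled two-channel readout of hemispheric summed activity versus a maximum-likelihood decoder that exploits each cell's tuning curve. The synthetic tuning curves and Poisson noise are assumptions for illustration, not the recorded responses or the exact decoders tested in the paper.

```python
# Toy comparison: pooled hemispheric readout vs. tuning-aware ML decoding.
import numpy as np

rng = np.random.default_rng(1)
dirs = np.linspace(-90, 90, 37)                 # azimuth grid (deg)
prefs = rng.uniform(-90, 90, 60)                # heterogeneous preferred azimuths (assumed)
widths = rng.uniform(20, 60, 60)

def tuning(az):
    """Mean firing rates of all cells for a source at azimuth az."""
    return 5 + 40 * np.exp(-((az - prefs) ** 2) / (2 * widths ** 2))

mean_rates = np.stack([tuning(d) for d in dirs])  # (n_dirs, n_cells)

def ml_decode(counts):
    """Poisson log-likelihood of the counts under each candidate direction."""
    ll = counts @ np.log(mean_rates.T) - mean_rates.sum(axis=1)
    return dirs[np.argmax(ll)]

def pooled_decode(counts):
    """Sum left- vs right-preferring cells, match against the mean difference curve."""
    left, right = prefs < 0, prefs >= 0
    diff = counts[left].sum() - counts[right].sum()
    calib = mean_rates[:, left].sum(1) - mean_rates[:, right].sum(1)
    return dirs[np.argmin(np.abs(calib - diff))]

true = 30.0
trials = rng.poisson(tuning(true), size=(500, 60))
print("ML RMSE    :", np.sqrt(np.mean([(ml_decode(c) - true) ** 2 for c in trials])))
print("Pooled RMSE:", np.sqrt(np.mean([(pooled_decode(c) - true) ** 2 for c in trials])))
```

    In this simplified single-stimulus setting both readouts can perform reasonably; the paper's point is that the pooled readout degrades in complex acoustical conditions, which the toy above does not attempt to reproduce.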

    A novel concept for dynamic adjustment of auditory space

    Traditionally, the auditory system is thought to serve reliable sound localization. Stimulus-history-driven feedback circuits in the early binaural pathway, however, contradict this canonical concept and raise questions about their functional significance. Here we show that stimulus-history-dependent changes in absolute space perception are poorly captured by the traditional labeled-line and hemispheric-difference models of auditory space coding. We therefore developed a new decoding model, incorporating recent electrophysiological findings, in which sound location is initially computed in both brain hemispheres independently and then combined to yield a hemispherically balanced code. This model closely captures the observed absolute localization errors caused by stimulus history, and furthermore predicts a selective dilation and compression of perceptual space. These model predictions are confirmed by improvements and degradations of spatial resolution in human listeners. Thus, dynamic perception of auditory space facilitates focal sound source segregation at the expense of absolute sound localization, calling existing concepts of spatial hearing into question.
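
    A minimal sketch of the hemispherically balanced readout idea follows: each hemisphere's contralaterally tuned activity is combined into a normalized difference, and a gain change on one side (standing in for stimulus-history adaptation) shifts the resulting percept. The sigmoid shape, gain values, and scaling below are assumptions for illustration, not the published model's parameters.

```python
# Illustrative sketch of a hemispherically balanced location code (toy model).
import numpy as np

def hemi_rate(az_deg, side):
    """Sigmoidal population rate, tuned to contralateral space (positive az = right)."""
    s = +1.0 if side == "left" else -1.0
    return 1.0 / (1.0 + np.exp(-s * az_deg / 20.0))

def percept(az_deg, g_left=1.0, g_right=1.0):
    """Percept from the gain-weighted, normalized difference of the two hemispheres."""
    l = g_left * hemi_rate(az_deg, "left")
    r = g_right * hemi_rate(az_deg, "right")
    return 90.0 * (l - r) / (l + r)

for az in (-60, 0, 60):
    # Adaptation to a right-sided stimulus history modeled as a reduced right gain.
    print(az, round(percept(az), 1), round(percept(az, g_right=0.8), 1))
```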

    Neurons in primary auditory cortex represent sound source location in a cue-invariant manner

    Auditory cortex is required for sound localisation, but how neural firing in auditory cortex underlies our perception of sound sources in space remains unclear. Specifically, it is not known whether neurons in auditory cortex represent individual spatial cues or an integrated representation of auditory space across cues. Here, we measured the spatial receptive fields of neurons in primary auditory cortex (A1) while ferrets performed a relative localisation task. Manipulating the availability of binaural and spectral localisation cues had little impact on ferrets’ performance or on neural spatial tuning. A subpopulation of neurons encoded spatial position consistently across localisation cue type. Furthermore, decoders operating on neural firing patterns outperformed two-channel model decoders applied to the same population activity. Together, these observations suggest that A1 encodes the location of sound sources rather than the values of individual spatial cues.
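
    The cross-cue decoding logic can be sketched as follows: train a nearest-centroid pattern decoder on responses under one cue condition and test it under another, with a cue-invariant subpopulation keeping its tuning across conditions. The synthetic cells, the 40% invariant fraction, and the cue-induced tuning shift below are assumptions for illustration only.

```python
# Toy cross-cue test of a nearest-centroid firing-pattern decoder.
import numpy as np

rng = np.random.default_rng(2)
positions = np.array([-60, -20, 20, 60])      # relative positions (deg)
n_cells = 50
prefs = rng.uniform(-90, 90, n_cells)
invariant = rng.random(n_cells) < 0.4         # assumed cue-invariant subpopulation

def responses(pos, cue_shift):
    """Mean rates; non-invariant cells shift their tuning when the cue changes."""
    eff = prefs + np.where(invariant, 0.0, cue_shift)
    return 5 + 30 * np.exp(-((pos - eff) ** 2) / (2 * 40.0 ** 2))

def trials(pos, cue_shift, n=100):
    return rng.poisson(responses(pos, cue_shift), size=(n, n_cells))

# Train centroids under one cue condition, test under a shifted cue condition.
centroids = np.stack([trials(p, 0.0).mean(0) for p in positions])
test = [(p, c) for p in positions for c in trials(p, 25.0, n=50)]
accuracy = np.mean([positions[np.argmin(((c - centroids) ** 2).sum(1))] == p
                    for p, c in test])
print(f"cross-cue decoding accuracy: {accuracy:.2f}")
```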

    The role of spatial cues for processing speech in noise

    How can we understand speech in difficult listening conditions? This question, centered on the ‘cocktail party problem’, has been studied for decades with psychophysical, physiological and modelling studies, but the answer remains elusive. In the cochlea, sounds are processed through a filter bank that separates them into frequency bands, each sensed by different sensory neurons. All the sounds coming from a single source must then be recombined in the brain to create a unified speech percept. One strategy to achieve this grouping is to use a common sound source location. In the frequency range of human speech, the location of sound sources in the azimuthal plane is mainly perceived through interaural time differences (ITDs). We studied the mechanisms of ITD processing by comparing vowel discrimination performance in noise with coherent or incoherent ITDs across auditory filters. We showed that coherent ITD cues within one auditory filter were necessary for human subjects to take advantage of spatial unmasking, but that a single sound source could carry different ITDs across auditory filters. These psychophysical results were best reproduced in the gerbil inferior colliculus (IC) when large neuronal populations, optimized for natural spatial unmasking, were used to discriminate the vowels in all the spatial conditions. Our results establish a parallel between human behavior and neuronal computations in the IC, highlighting the potential importance of the IC for discriminating sounds in complex spatial environments.
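
    The stimulus manipulation at the heart of the psychophysics can be sketched as follows: split a signal into frequency bands and impose either a common ITD on all bands (coherent) or an independent ITD per band (incoherent). The rectangular band edges and FFT-based delays below are simplifications standing in for the auditory filters used in the study.

```python
# Sketch of coherent vs. incoherent per-band ITD imposition (toy filter bank).
import numpy as np

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(3)
left_ear = rng.standard_normal(t.size)        # stand-in for a vowel token

edges = np.array([100, 300, 700, 1500, 3000])  # band edges (Hz), assumed

def delayed_bands(x, itds_s):
    """Delay each band by its own ITD via a frequency-domain phase shift."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1 / fs)
    y = np.zeros_like(x)
    for lo, hi, itd in zip(edges[:-1], edges[1:], itds_s):
        band = np.where((f >= lo) & (f < hi), X, 0)
        y += np.fft.irfft(band * np.exp(-2j * np.pi * f * itd), n=x.size)
    return y

n_bands = edges.size - 1
right_coherent = delayed_bands(left_ear, [300e-6] * n_bands)                 # one common ITD
right_incoherent = delayed_bands(left_ear, rng.uniform(-300e-6, 300e-6, n_bands))
# Pairing left_ear with either output yields the coherent / incoherent stereo stimuli.
print(right_coherent.shape, right_incoherent.shape)
```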