10 research outputs found

    A Population Rate Code of Auditory Space in the Human Cortex

    BACKGROUND: Previous work on the human auditory cortex has revealed areas specialized in spatial processing, but how the neurons in these areas represent the location of a sound source remains unknown. METHODOLOGY/PRINCIPAL FINDINGS: Here, we performed a magnetoencephalography (MEG) experiment with the aim of revealing the neural code of auditory space implemented by the human cortex. In a stimulus-specific adaptation paradigm, realistic spatial sound stimuli were presented in pairs of adaptor and probe locations. We found that the attenuation of the N1m response depended strongly on the spatial arrangement of the two sound sources. These location-specific effects showed that sounds originating from locations within the same hemifield activated the same neuronal population regardless of the spatial separation between the sound sources. In contrast, sounds originating from opposite hemifields activated separate groups of neurons. CONCLUSIONS/SIGNIFICANCE: These results are highly consistent with a rate code of spatial location formed by two opponent populations, one tuned to locations in the left and the other to those in the right. This indicates that the neuronal code of sound source location implemented by the human auditory cortex is similar to that previously found in other primates.
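
    The opponent-population account above can be made concrete with a small numerical sketch. The Python snippet below is a toy model, not the study's analysis code; the sigmoid slope and azimuth values are illustrative assumptions. It shows how two broadly tuned channels, one preferring each hemifield, respond to sources at different azimuths, and why two sources within the same hemifield drive largely the same population:

```python
import numpy as np

def opponent_channel_rates(azimuth_deg, slope=0.05):
    """Firing rates of two opponent hemifield-tuned channels for a source
    at the given azimuth (negative = left, positive = right, 0 = midline).
    Broad sigmoidal tuning centered at the midline; slope is illustrative."""
    right = 1.0 / (1.0 + np.exp(-slope * azimuth_deg))  # right-tuned population
    left = 1.0 - right                                  # mirror-image left-tuned population
    return left, right

# Sources anywhere within one hemifield drive mostly the same channel, so an
# adaptor and a probe in the same hemifield share a neuronal population:
for az in (-90, -45, -10, 0, 10, 45, 90):
    l, r = opponent_channel_rates(az)
    print(f"azimuth {az:+4d} deg -> left channel {l:.2f}, right channel {r:.2f}")
```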

    Human cortical sensitivity to interaural level differences in low- and high-frequency sounds

    Interaural level difference (ILD) is used as a cue in horizontal sound source localization. In free field, the magnitude of ILD depends on frequency: it is more prominent at high than low frequencies. Here, a magnetoencephalography experiment was conducted to test whether the sensitivity of the human auditory cortex to ILD is also frequency-dependent. Robust cortical sensitivity to ILD was found that could not be explained by monaural level effects, but this sensitivity did not differ between low- and high-frequency stimuli. This is consistent with previous psychoacoustical investigations showing that performance in ILD discrimination is not dependent on frequency.
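
    As a concrete illustration of the stimulus dimension tested here, the sketch below (assumed parameters, not the stimuli of the study) imposes a given ILD on a pure tone by splitting the level difference symmetrically between the ears, so that the overall level stays constant and any cortical effect cannot be attributed to a monaural level change:

```python
import numpy as np

def dichotic_tone(freq_hz, ild_db, dur_s=0.2, fs=48000):
    """Pure tone with the requested interaural level difference (ILD).
    The ILD is split symmetrically between the ears so the overall level
    stays constant, isolating the ILD from monaural level changes."""
    t = np.arange(int(dur_s * fs)) / fs
    tone = np.sin(2 * np.pi * freq_hz * t)
    gain = 10 ** ((ild_db / 2) / 20)        # +/- half the ILD per ear
    left, right = tone / gain, tone * gain  # positive ILD favors the right ear
    return np.stack([left, right])          # shape: (2 ears, samples)

low = dichotic_tone(500, ild_db=10)    # low-frequency carrier
high = dichotic_tone(4000, ild_db=10)  # high-frequency carrier
```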

    Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds

    The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, the brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues characteristic of reverberant speech. This stimulus, named the amplitude-modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics, it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of “glimpsing” low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments.
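
    A minimal sketch of this stimulus class is given below, assuming a raised-cosine amplitude envelope and implementing the continuously drifting interaural fine-structure phase as a small interaural carrier-frequency difference; all parameter values are illustrative assumptions, not those of the published experiments:

```python
import numpy as np

def ambb(fc=500.0, beat_hz=1.0, mod_hz=8.0, dur_s=2.0, fs=48000):
    """Amplitude-modulated binaural beat (AMBB) sketch.
    A small interaural carrier-frequency difference makes the interaural
    phase of the fine structure drift continuously (the binaural beat),
    while an identical amplitude modulation is applied to both ears, so
    modulation rate and phase relations can be varied independently."""
    t = np.arange(int(dur_s * fs)) / fs
    am = 0.5 * (1 - np.cos(2 * np.pi * mod_hz * t))     # raised-cosine AM, 0..1
    left = am * np.sin(2 * np.pi * fc * t)
    right = am * np.sin(2 * np.pi * (fc + beat_hz) * t)  # drifting interaural phase
    return np.stack([left, right])

stim = ambb()  # each modulation cycle samples a different interaural phase
```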

    Neural realignment of spatially separated sound components

    Natural auditory scenes often consist of several sound sources overlapping in time but separated in space. Yet location is not fully exploited in auditory grouping: spatially separated sounds can be perceptually fused into a single auditory object, which leads to difficulties in the identification and localization of concurrent sounds. Here, the brain mechanisms responsible for grouping across spatial locations were explored in magnetoencephalography (MEG) recordings. The results show that the cortical representation of a vowel spatially separated into two locations reflects the perceived location of the speech sound rather than the physical locations of the individual components. In other words, the auditory scene is neurally rearranged to bring components into spatial alignment when they are deemed to belong to the same object. This renders the original spatial information unavailable at the level of the auditory cortex and may contribute to difficulties in concurrent sound segregation.

    A common periodic representation of interaural time differences in mammalian cortex

    Binaural hearing, the ability to detect small differences in the timing and level of sounds at the two ears, underpins the ability to localize sound sources along the horizontal plane, and is important for decoding complex spatial listening environments into separate objects, a critical factor in ‘cocktail-party listening’. For human listeners, the most important spatial cue is the interaural time difference (ITD). Despite many decades of neurophysiological investigations of ITD sensitivity in small mammals, and computational models aimed at accounting for human perception, a lack of concordance between these studies has hampered our understanding of how the human brain represents and processes ITDs. Further, neural coding of spatial cues might depend on factors such as head size or hearing range, which differ considerably between humans and commonly used experimental animals. Here, using magnetoencephalography (MEG) in human listeners, and electrocorticography (ECoG) recordings in the guinea pig (a small mammal representative of a range of animals in which ITD coding has been assessed at the level of single-neuron recordings), we tested whether processing of ITDs in human auditory cortex accords with the frequency-dependent periodic code of ITD reported in small mammals, or whether alternative or additional processing stages implemented in psychoacoustic models of human binaural hearing must be assumed. Our data were well accounted for by a model consisting of periodically tuned ITD detectors, and were highly consistent across the two species. The results suggest that the representation of ITD in human auditory cortex is similar to that found in other mammalian species: a representation in which neural responses to ITD are determined by phase differences relative to sound frequency rather than, for instance, the range of ITDs permitted by head size or the absolute magnitude or direction of ITD.
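
    The periodic code referred to above can be written down compactly: a detector's response is governed by the interaural phase difference (IPD) that a given ITD produces at the stimulus frequency, so its tuning repeats with the period of that frequency. The sketch below is a toy detector illustrating this idea; the best-IPD value is an illustrative assumption, not a fitted parameter from the study:

```python
import numpy as np

def periodic_itd_response(itd_s, freq_hz, best_ipd_rad=np.pi / 4):
    """Response of a periodically tuned ITD detector. Firing is set by the
    interaural phase difference relative to a best IPD, so the tuning
    repeats with the stimulus period instead of being anchored to an
    absolute ITD (e.g., one given by head size)."""
    ipd = 2 * np.pi * freq_hz * itd_s              # ITD expressed as phase at this frequency
    return 0.5 * (1 + np.cos(ipd - best_ipd_rad))  # response normalized to 0..1

# The same physical ITD maps to different phases at different frequencies,
# which is the frequency dependence the MEG/ECoG data were tested against:
for f in (250, 500, 1000):
    print(f"{f} Hz -> response {periodic_itd_response(500e-6, f):.2f}")
```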

    The average amplitude of the right-hemispheric N1m response to the frontal and rear probes.

    The responses were prominent when adaptors were located in front, in the rear, or in the right hemifield. When the adaptors were presented in the same (left) hemifield as the probe, response amplitudes were small. This is consistent with auditory cortical neurons having laterally centered and wide spatial tuning (for comparison, see Fig. 2D: http://www.plosone.org/article/info:doi/10.1371/journal.pone.0007600#pone-0007600-g002).

    Minimum current estimates of a representative subject obtained at the N1m peak latency.

    In all conditions, the activity originated mainly from the temporal areas in the proximity of the auditory cortex.

    Experimental predictions of the population rate code derived for different numbers of neurons tuned to the left and right hemifields.

    When the proportion of neurons tuned to the left exceeded 30% of all neurons, the predicted MEG results resembled those obtained in the present experiment.
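
    As a rough illustration of how such predictions can be generated, the toy model below (not the authors' actual model; the sigmoid tuning, adaptation strength, and azimuths are all assumed values) computes the adapted probe response of a two-channel population as the proportion of left-tuned neurons is varied:

```python
import numpy as np

def predicted_probe_response(p_left, probe_az, adaptor_az, slope=0.05, adapt=0.7):
    """Predicted population response to a probe after an adaptor, for a
    population with proportion p_left of left-tuned neurons. Each channel
    responds to the probe with a gain reduced in proportion to how strongly
    the adaptor drove it. All parameter values are illustrative."""
    def rates(az):
        r = 1.0 / (1.0 + np.exp(-slope * az))  # right-channel drive
        return 1.0 - r, r                      # (left, right)
    probe_l, probe_r = rates(probe_az)
    adapt_l, adapt_r = rates(adaptor_az)
    resp_l = probe_l * (1 - adapt * adapt_l)   # adapted left-channel response
    resp_r = probe_r * (1 - adapt * adapt_r)   # adapted right-channel response
    return p_left * resp_l + (1 - p_left) * resp_r

# Left-hemifield probe after a right-hemifield adaptor, for varying p_left:
for p in (0.2, 0.3, 0.5, 0.7):
    amp = predicted_probe_response(p, probe_az=-45, adaptor_az=+45)
    print(f"p_left={p:.1f} -> predicted amplitude {amp:.3f}")
```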

    Grand-averaged event-related fields measured from the left and right hemisphere.

    The smallest responses, that is, the strongest adaptation, were found for the conditions in which the adaptor and the probe were at the same location (black) or when the adaptor was in the same hemifield as the probe (blue and green). For adaptors at the midline or in the right hemifield (purple and red), the responses were larger and, thus, adaptation was weaker. The largest responses were found when no adaptor was presented (gray).