
    Neurophysiological Mechanisms Involved in Auditory Perceptual Organization

    In our complex acoustic environment, we are confronted with a mixture of sounds produced by several simultaneous sources. However, we rarely perceive these sounds as incomprehensible noise. Our brain uses perceptual organization processes to follow the emission of each sound source independently over time. While the acoustic properties exploited in these processes are well established, the neurophysiological mechanisms involved in auditory scene analysis remain unclear and have recently attracted growing interest. Here, we review the studies investigating these mechanisms using electrophysiological recordings, from the cochlear nucleus to the auditory cortex, in animals and humans. Their findings reveal that basic mechanisms such as frequency selectivity, forward suppression, and multi-second habituation shape the automatic brain responses to sounds in a way that can account for several important characteristics of the perceptual organization of both simultaneous and successive sounds. One challenging question remains unresolved: how are the resulting activity patterns integrated to yield the corresponding conscious percepts?
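
    The forward suppression mentioned above can be illustrated with a toy model (not one from the review): each tone transiently suppresses the response to the next, and the suppression partially recovers between tones, so responses decline from the first tone toward an asymptote. Both parameters below are illustrative choices.

```python
def forward_suppressed_responses(n_tones, recovery=0.6, suppression=0.5):
    """Toy forward-suppression model (illustrative, not from the review).

    Each tone reduces the available responsiveness by `suppression`;
    a fraction `recovery` of the lost responsiveness returns before
    the next tone arrives.
    """
    responses = []
    state = 1.0  # available responsiveness before the first tone
    for _ in range(n_tones):
        responses.append(state)
        state *= (1.0 - suppression)            # suppression by this tone
        state += (1.0 - state) * recovery       # partial recovery before next tone
    return responses

r = forward_suppressed_responses(5)
# The first tone gets the full response; later tones are suppressed
# but settle toward a non-zero asymptote.
print(r[0] > r[1] > r[4] > 0)  # → True
```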

    Non-Verbal Auditory Cognition in Patients with Temporal Epilepsy Before and After Anterior Temporal Lobectomy

    For patients with pharmaco-resistant temporal lobe epilepsy, unilateral anterior temporal lobectomy (ATL) – i.e., the surgical resection of the hippocampus, the amygdala, the temporal pole, and the most anterior part of the temporal gyri – is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds, evaluating auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients both before and after ATL presented with similar deficits in pitch retention and in the identification and short-term memorization of environmental sounds, while not being impaired in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.

    Distinct Gamma-Band Components Reflect the Short-Term Memory Maintenance of Different Sound Lateralization Angles

    Oscillatory activity in the human electro- or magnetoencephalogram has been related to cortical stimulus representations and their modulation by cognitive processes. Whereas previous work has focused on gamma-band activity (GBA) during attention or maintenance of representations, there is little evidence for GBA reflecting individual stimulus representations. The present study aimed at identifying stimulus-specific GBA components during auditory spatial short-term memory. A total of 28 adults were assigned to 1 of 2 groups who were presented with only right- or left-lateralized sounds, respectively. In each group, 2 sample stimuli were used which differed in their lateralization angles (15° or 45°) with respect to the midsagittal plane. Statistical probability mapping served to identify spectral amplitude differences between 15° versus 45° stimuli. Distinct GBA components were found for each sample stimulus in different sensors over parieto-occipital cortex contralateral to the side of stimulation, peaking during the middle 200–300 ms of the delay phase. The differentiation between “preferred” and “nonpreferred” stimuli during the final 100 ms of the delay phase correlated with task performance. These findings suggest that the observed GBA components reflect the activity of distinct networks tuned to spatial sound features which contribute to the maintenance of task-relevant information in short-term memory.
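
    As a rough illustration of the signal processing behind such gamma-band amplitude measures (not the study's actual pipeline), a channel can be band-pass filtered into the gamma range and its instantaneous amplitude taken as the Hilbert envelope. All parameters below (sampling rate, 40–90 Hz band, filter order, synthetic data) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_band_amplitude(signal, fs, band=(40.0, 90.0)):
    """Band-pass filter into the gamma range and return the
    instantaneous amplitude (Hilbert envelope)."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, signal)  # zero-phase filtering
    return np.abs(hilbert(filtered))

# Synthetic example: a 60 Hz burst embedded in low-level noise.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
sig = 0.1 * rng.standard_normal(t.size)
burst = (t > 0.4) & (t < 0.6)
sig[burst] += np.sin(2 * np.pi * 60 * t[burst])

env = gamma_band_amplitude(sig, fs)
# The envelope should be clearly larger during the burst than outside it.
print(env[burst].mean() > 2 * env[~burst].mean())  # → True
```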

    Stream segregation in the anesthetized auditory cortex

    Auditory stream segregation describes the way that sounds are perceptually segregated into groups or streams on the basis of perceptual attributes such as pitch or spectral content. For sequences of pure tones, segregation depends on the tones' proximity in frequency and time. In the auditory cortex (and elsewhere), responses to sequences of tones depend on stimulus conditions in a similar way to the perception of these stimuli. However, although highly dependent on stimulus conditions, perception is also clearly influenced by factors unrelated to the stimulus, such as attention. Exactly how ‘bottom-up’ sensory processes and non-sensory ‘top-down’ influences interact is still not clear. Here, we recorded responses to alternating tones (ABAB…) of varying frequency difference (FD) and rate of presentation (PR) in the auditory cortex of anesthetized guinea pigs. These data complement previous studies, in that top-down processing resulting from conscious perception should be absent or at least considerably attenuated. Under anesthesia, the responses of cortical neurons to the tone sequences adapted rapidly, in a manner sensitive to both the FD and PR of the sequences. While the responses to tones at frequencies more distant from neurons' best frequencies (BFs) decreased as the FD increased, the responses to tones near BF increased, consistent with a release from adaptation, or forward suppression. Increases in PR resulted in reductions in responses to all tones, but the reduction was greater for tones further from BF. Although asymptotically adapted responses to tones showed behavior qualitatively consistent with perceptual stream segregation, responses reached asymptote within 2 s, and responses to all tones were very weak at high PRs (>12 tones per second). A signal-detection model, driven by the cortical population response, made decisions that depended on both FD and PR in ways consistent with perceptual stream segregation. This included a range of conditions over which decisions could be made either in favor of perceptual integration or segregation, depending on the model's ‘decision criterion’. However, the rate of ‘build-up’ was more rapid than seen perceptually, and at high PRs responses to tones were sometimes so weak as to be undetectable by the model. Under anesthesia, adaptation occurs rapidly, and at high PRs tones are generally poorly represented, which compromises the interpretation of the experiment. However, within these limitations, these results complement experiments in awake animals and humans. They generally support the hypothesis that ‘bottom-up’ sensory processing plays a major role in perceptual organization, and that the processes underlying stream segregation are active in the absence of attention.
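
    The logic of such a criterion-based decision can be sketched with a deliberately simple rule (not the published model): if the responses of a neuron tuned to the A tone differ strongly enough between A and B tones, the sequence is judged segregated; otherwise integrated. The response values and the criterion below are hypothetical.

```python
def stream_decision(resp_a, resp_b, criterion=0.5):
    """Toy signal-detection rule (illustrative, not the paper's model).

    `resp_a` and `resp_b` are responses of an A-tuned unit to the A and
    B tones. If their normalized difference exceeds `criterion`, the
    tones are judged to form two segregated streams.
    """
    d = abs(resp_a - resp_b) / max(resp_a, resp_b)
    return "segregated" if d > criterion else "integrated"

# Small frequency difference: the B tone still drives the A-tuned unit.
print(stream_decision(resp_a=1.0, resp_b=0.8))  # → integrated
# Large frequency difference: the B tone falls outside the tuning curve.
print(stream_decision(resp_a=1.0, resp_b=0.1))  # → segregated
```

    Raising or lowering `criterion` shifts the boundary between the two percepts, mirroring the range of conditions over which the model could decide either way.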

    Neural Correlates of Auditory Perceptual Awareness under Informational Masking

    Our ability to detect target sounds in complex acoustic backgrounds is often limited not by the ear's resolution, but by the brain's information-processing capacity. The neural mechanisms and loci of this “informational masking” are unknown. We combined magnetoencephalography with simultaneous behavioral measures in humans to investigate neural correlates of informational masking and auditory perceptual awareness in the auditory cortex. Cortical responses were sorted according to whether or not target sounds were detected by the listener in a complex, randomly varying multi-tone background known to produce informational masking. Detected target sounds elicited a prominent, long-latency response (50–250 ms), whereas undetected targets did not. In contrast, both detected and undetected targets produced equally robust auditory middle-latency, steady-state responses, presumably from the primary auditory cortex. These findings indicate that neural correlates of auditory awareness in informational masking emerge between early and late stages of processing within the auditory cortex.

    Auditory Selective Attention to Speech Modulates Activity in the Visual Word Form Area

    Selective attention to speech versus nonspeech signals in complex auditory input could produce top-down modulation of cortical regions previously linked to perception of spoken, and even visual, words. To isolate such top-down attentional effects, we contrasted 2 equally challenging active listening tasks, performed on the same complex auditory stimuli (words overlaid with a series of 3 tones). Instructions required selectively attending to either the speech signals (in service of rhyme judgment) or the melodic signals (tone-triplet matching). Selective attention to speech, relative to attention to melody, was associated with blood oxygenation level–dependent (BOLD) increases during functional magnetic resonance imaging (fMRI) in the left inferior frontal gyrus, temporal regions, and the visual word form area (VWFA). Further investigation of the activity in visual regions revealed overall deactivation relative to baseline rest for both attention conditions. Topographic analysis demonstrated that while attending to melody drove deactivation equivalently across all fusiform regions of interest examined, attending to speech produced a regionally specific modulation: deactivation of all fusiform regions except the VWFA. Results indicate that selective attention to speech can topographically tune extrastriate cortex, leading to increased activity in the VWFA relative to surrounding regions, in line with the well-established connectivity between areas related to spoken and visual word perception in skilled readers.

    Faster maturation of selective attention in musically trained children and adolescents: Converging behavioral and event-related potential evidence

    Previous work suggests that musical training in childhood is associated with enhanced executive functions. However, it is unknown whether this advantage extends to selective attention, another central aspect of executive control. We recorded a well-established event-related potential (ERP) marker of distraction, the P3a, during an audio-visual task to investigate the maturation of selective attention in musically trained children and adolescents aged 10–17 years and a control group of untrained peers. The task required categorization of visual stimuli while a sequence of standard sounds and distracting novel sounds was presented in the background. The music group outperformed the control group in the categorization task, and the younger children in the music group showed a smaller P3a to the distracting novel sounds than their peers in the control group. Also, a negative response elicited by the novel sounds in the N1/MMN time range (~150–200 ms) was smaller in the music group. These results indicate that the music group was less easily distracted by the task-irrelevant sound stimulation and gated the neural processing of the novel sounds more efficiently than the control group. Furthermore, we replicated our previous finding that, relative to the control group, the musically trained children and adolescents performed faster in standardized tests for inhibition and set shifting. These results provide novel converging behavioral and electrophysiological evidence from a cross-modal paradigm for accelerated maturation of selective attention in musically trained children and adolescents and corroborate the association between musical training and enhanced inhibition and set shifting.

    Cortical gamma-oscillations modulated by auditory–motor tasks: intracranial recording in patients with epilepsy

    Human activities often involve hand-motor responses following external auditory–verbal commands. It has been believed that hand movements are predominantly driven by the contralateral primary sensorimotor cortex, whereas auditory–verbal information is processed in both superior temporal gyri. It remains unknown whether cortical activation in the superior temporal gyrus during an auditory–motor task is affected by the laterality of hand-motor responses. Here, event-related gamma-oscillations were intracranially recorded as quantitative measures of cortical activation; we determined how cortical structures were activated by auditory-cued movement using each hand in 15 patients with focal epilepsy. Auditory–verbal stimuli elicited augmentation of gamma-oscillations in a posterior portion of the superior temporal gyrus, whereas hand-motor responses elicited gamma-augmentation in the pre- and postcentral gyri. The magnitudes of such gamma-augmentation in the superior temporal, precentral, and postcentral gyri were significantly larger when the hand contralateral to the recorded hemisphere was required for motor responses, compared with when the ipsilateral hand was. The superior temporal gyrus in each hemisphere may play a more pivotal role when the contralateral hand needs to be used for motor responses than when the ipsilateral hand does. Hum Brain Mapp, 2010. © 2010 Wiley-Liss, Inc.

    A Blueprint for Real-Time Functional Mapping via Human Intracranial Recordings

    BACKGROUND: The surgical treatment of patients with intractable epilepsy is preceded by a pre-surgical evaluation period during which intracranial EEG recordings are performed to identify the epileptogenic network and provide a functional map of eloquent cerebral areas that need to be spared to minimize the risk of post-operative deficits. A growing body of research based on such invasive recordings indicates that cortical oscillations at various frequencies, especially in the gamma range (40 to 150 Hz), can provide efficient markers of task-related neural network activity. PRINCIPAL FINDINGS: Here we introduce a novel real-time investigation framework for mapping human brain functions based on online visualization of the spectral power of the ongoing intracranial activity. The results obtained with the first two implanted epilepsy patients who used the proposed online system illustrate its feasibility and utility both for clinical applications, as a complementary tool to electrical stimulation for presurgical mapping purposes, and for basic research, as an exploratory tool used to detect correlations between behavior and oscillatory power modulations. Furthermore, our findings suggest a putative role for high gamma oscillations in higher-order auditory processing involved in speech and music perception. CONCLUSION/SIGNIFICANCE: The proposed real-time setup is a promising tool for presurgical mapping, the investigation of functional brain dynamics, and possibly for neurofeedback training and brain–computer interfaces.
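
    The core computation of such an online display, the spectral power of the ongoing signal within a frequency band, can be sketched per sliding window as follows. The 40–150 Hz band comes from the abstract; the sampling rate, window length, and synthetic data are illustrative assumptions, and this is not the authors' implementation.

```python
import numpy as np
from scipy.signal import welch

def band_power(window, fs, band=(40.0, 150.0)):
    """Power of one sliding window of signal, summed over the
    gamma band via a Welch power spectral density estimate."""
    freqs, psd = welch(window, fs=fs, nperseg=min(len(window), 256))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(psd[mask]))

# Synthetic comparison: a window with a strong 80 Hz component
# versus a window of low-level noise.
fs = 1000.0
t = np.arange(0, 0.5, 1 / fs)
quiet = 0.05 * np.random.default_rng(1).standard_normal(t.size)
active = quiet + np.sin(2 * np.pi * 80 * t)

# Gamma power should be far higher in the "active" window.
print(band_power(active, fs) > 10 * band_power(quiet, fs))  # → True
```

    In a real-time setting this function would be applied to each new buffer of samples per channel, with the per-channel values feeding the on-screen map.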