    Comodulation enhances signal detection via priming of auditory cortical circuits

    Acoustic environments are composed of complex overlapping sounds that the auditory system must segregate into discrete perceptual objects. The functions of distinct auditory processing stations in this challenging task are poorly understood. Here we show a direct role for mouse auditory cortex in the detection and segregation of acoustic information. We measured the sensitivity of auditory cortical neurons to brief tones embedded in masking noise. By altering the spectrotemporal characteristics of the masker, we reveal that sensitivity to pure-tone stimuli is strongly enhanced in coherently modulated broadband noise, corresponding to the psychoacoustic phenomenon of comodulation masking release. Improvements in detection were largest following priming periods of noise alone, indicating that cortical segregation is enhanced over time. Transient opsin-mediated silencing of auditory cortex during the priming period almost completely abolished these improvements, suggesting that cortical processing may play a direct and significant role in the detection of quiet sounds in noisy environments.
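
    As a rough illustration of the comodulation masking release (CMR) paradigm described above, the sketch below builds a brief tone embedded either in a masker whose frequency bands share a single amplitude envelope (comodulated) or in one whose bands are modulated independently. The band centres, levels, modulation rate, and durations are illustrative assumptions, not the study's stimulus parameters.

```python
import numpy as np

# Illustrative sketch, not the study's stimulus code. All parameters are assumptions.
fs = 44100                                  # sampling rate (Hz)
t = np.arange(int(fs * 1.0)) / fs           # 1 s of signal

def narrowband_noise(centre_hz):
    """Crude narrowband noise: smoothed Gaussian noise heterodyned to centre_hz."""
    lowpass = np.convolve(np.random.randn(t.size), np.ones(64) / 64, mode="same")
    return lowpass * np.cos(2 * np.pi * centre_hz * t)

def masker(comodulated, centres=(1000, 2000, 3000, 4000), mod_hz=10.0):
    """Sum of narrowband noise bands, modulated coherently or independently."""
    shared_env = 0.5 * (1.0 + np.sin(2 * np.pi * mod_hz * t))
    bands = []
    for c in centres:
        if comodulated:
            env = shared_env                 # the same envelope in every band
        else:
            phase = np.random.uniform(0.0, 2.0 * np.pi)
            env = 0.5 * (1.0 + np.sin(2 * np.pi * mod_hz * t + phase))  # independent envelope
        bands.append(env * narrowband_noise(c))
    return np.sum(bands, axis=0)

# Brief 2 kHz target tone starting after 0.6 s of masker alone (the "priming" period).
tone = np.zeros_like(t)
onset, length = int(0.6 * fs), int(0.05 * fs)
tone[onset:onset + length] = 0.1 * np.sin(2 * np.pi * 2000.0 * t[:length])

stimulus_comodulated = masker(comodulated=True) + tone   # tone detection is easier here (CMR)
stimulus_independent = masker(comodulated=False) + tone
```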

    Resonant Neural Dynamics of Speech Perception

    What is the neural representation of a speech code as it evolves in time? How do listeners integrate temporally distributed phonemic information across hundreds of milliseconds, even backwards in time, into coherent representations of syllables and words? What sorts of brain mechanisms encode the correct temporal order, despite such backwards effects, during speech perception? How does the brain extract rate-invariant properties of variable-rate speech? This article describes an emerging neural model that suggests answers to these questions, while quantitatively simulating challenging data about audition, speech and word recognition. This model includes bottom-up filtering, horizontal competitive, and top-down attentional interactions between a working memory for short-term storage of phonetic items and a list categorization network for grouping sequences of items. The conscious speech and word recognition code is suggested to be a resonant wave of activation across such a network, and a percept of silence is proposed to be a temporal discontinuity in the rate with which such a resonant wave evolves. Properties of these resonant waves can be traced to the brain mechanisms whereby auditory, speech, and language representations are learned in a stable way through time. Because resonances are proposed to control stable learning, the model is called an Adaptive Resonance Theory, or ART, model.
    Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-01-1-0624)

    Neural Correlates of Auditory Perceptual Awareness and Release from Informational Masking Recorded Directly from Human Cortex: A Case Study.

    In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity (HGA) between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus), as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of these responses, if any at all. While it remains unclear whether these responses reflect conscious perception itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.

    The mechanisms of tinnitus: perspectives from human functional neuroimaging

    In this review, we highlight the contribution of advances in human neuroimaging to the current understanding of the central mechanisms underpinning tinnitus, and we explain how interpretations of neuroimaging data have been guided by animal models. The primary motivation for studying the neural substrates of tinnitus in humans has been to demonstrate objectively its representation in the central auditory system and to develop a better understanding of its diverse pathophysiology and of the functional interplay between sensory, cognitive and affective systems. The ultimate goal of neuroimaging is to identify subtypes of tinnitus in order to better inform treatment strategies. The three neural mechanisms considered in this review may provide a basis for classifying tinnitus. While human neuroimaging evidence strongly implicates the central auditory system and emotional centres in tinnitus, evidence for the precise contribution of the three mechanisms is unclear because the data are somewhat inconsistent. We consider a number of methodological issues limiting the field of human neuroimaging and recommend approaches to overcome potential inconsistency in results arising from poorly matched participants, lack of appropriate controls and low statistical power.

    Functional anatomy of the masking level difference, an fMRI study

    Introduction: Masking level differences (MLDs) are differences in the hearing threshold for detecting a signal presented against a noise background when the phase of either the signal or the noise is reversed between the ears. We use N0/Nπ to denote noise presented in-phase/out-of-phase between the ears and S0/Sπ to denote a 500 Hz sine-wave signal presented in-phase/out-of-phase. Signal detection for the noise/signal combinations N0Sπ and NπS0 is typically 10-20 dB better than for N0S0, even though all combinations have the same spectrum, level, and duration for both the signal and the noise. Methods: Ten participants (5 female, aged 22-43), each with an N0Sπ-N0S0 MLD greater than 10 dB, were imaged using a sparse BOLD fMRI sequence with a 9-second gap (1 second of quiet preceding the stimuli). Band-pass (400-600 Hz) noise and an enveloped signal (0.25-second tone bursts, 50% duty cycle) were used to create the stimuli. Brain maps of statistically significant regions were formed from a second-level analysis using SPM5. Results: The contrast NπS0-N0Sπ showed significant activation in the right pulvinar, the corpus callosum, and the insula bilaterally. The left inferior frontal gyrus showed significant activation for the contrasts N0Sπ-N0S0 and NπS0-N0S0. The contrast N0S0-N0Sπ revealed a region in the right insula, and the contrast N0S0-NπS0 a region in the left insula. Conclusion: Our results extend the view that the thalamus acts as a gating mechanism that enables dichotic listening, and suggest that MLD processing is accomplished through thalamic communication with the insulae, which communicate across the corpus callosum to either enhance or diminish the binaural signal, depending on the MLD condition. The audibility improvement of the signal in both MLD conditions is likely reflected by activation in the left inferior frontal gyrus, a late stage in the what/where model of auditory processing. © 2012 Wack et al.
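
    To make the three dichotic conditions concrete, the following sketch constructs N0S0, N0Sπ, and NπS0 stimuli along the lines described above (400-600 Hz noise, 500 Hz tone bursts of 0.25 s at a 50% duty cycle, and phase inversion of either the signal or the noise in one ear). The filter order, levels, and overall duration are assumptions rather than the study's exact values.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Illustrative sketch of the three dichotic conditions; parameters are assumptions.
fs = 44100
t = np.arange(int(fs * 2.0)) / fs                        # 2 s of stimulus

b, a = butter(4, [400 / (fs / 2), 600 / (fs / 2)], btype="band")
noise = lfilter(b, a, np.random.randn(t.size))           # 400-600 Hz masking noise

# 500 Hz tone bursts: 0.25 s on, 0.25 s off (50% duty cycle).
gate = ((t % 0.5) < 0.25).astype(float)
signal = 0.1 * np.sin(2 * np.pi * 500.0 * t) * gate

def dichotic(noise_sign, signal_sign):
    """Return (left, right) waveforms; a sign of -1 inverts that component in the right ear."""
    left = noise + signal
    right = noise_sign * noise + signal_sign * signal
    return left, right

n0_s0  = dichotic(+1, +1)   # noise and signal in phase at both ears (reference)
n0_spi = dichotic(+1, -1)   # signal inverted in one ear: ~10-20 dB masking release
npi_s0 = dichotic(-1, +1)   # noise inverted in one ear: comparable release
```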

    Enhanced amplitude modulations contribute to the Lombard intelligibility benefit: Evidence from the Nijmegen Corpus of Lombard Speech

    Speakers adjust their voice when talking in noise, a phenomenon known as Lombard speech. These acoustic adjustments facilitate speech comprehension in noise relative to plain speech (i.e., speech produced in quiet). However, exactly which characteristics of Lombard speech drive this intelligibility benefit in noise remains unclear. This study assessed the contribution of enhanced amplitude modulations to the Lombard speech intelligibility benefit by demonstrating that (1) native speakers of Dutch in the Nijmegen Corpus of Lombard Speech (NiCLS) produce more pronounced amplitude modulations in noise than in quiet; (2) stronger amplitude modulations correlate positively with intelligibility in a speech-in-noise perception experiment; and (3) transplanting the amplitude modulations from Lombard speech onto plain speech improves intelligibility, indicating that the enhanced amplitude modulations of Lombard speech contribute to its intelligibility in noise. The results are discussed in light of recent neurobiological models of speech perception, with reference to neural oscillators that phase-lock to the amplitude modulations in speech and thereby guide its processing.
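
    The envelope-transplant manipulation in point (3) can be pictured with the following sketch, which divides out the slow amplitude envelope of a plain utterance and imposes the envelope of the corresponding Lombard utterance. The Hilbert-plus-low-pass envelope estimate, the 10 Hz cut-off, and the assumption that the two recordings are time-aligned are illustrative choices, not the corpus's actual processing pipeline.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def amplitude_envelope(x, fs, cutoff_hz=10.0):
    """Slow amplitude envelope: magnitude of the analytic signal, low-pass filtered."""
    env = np.abs(hilbert(x))
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    return np.maximum(filtfilt(b, a, env), 1e-6)          # floor avoids division by zero

def transplant_envelope(plain, lombard, fs):
    """Impose the Lombard utterance's amplitude modulations on the plain utterance."""
    n = min(plain.size, lombard.size)                      # assumes time-aligned recordings
    plain, lombard = plain[:n], lombard[:n]
    flattened = plain / amplitude_envelope(plain, fs)      # remove the plain-speech envelope
    return flattened * amplitude_envelope(lombard, fs)     # apply the Lombard envelope
```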

    Auditory spatial deficits following hemispheric lesions: Dissociation of explicit and implicit processing.

    Auditory spatial deficits occur frequently after hemispheric damage; a previous case report suggested that explicit awareness of sound positions, as in sound localisation, can be impaired while the implicit use of auditory cues for the segregation of sound objects in noisy environments remains preserved. By systematically assessing patients with a first hemispheric lesion, we have shown that (1) explicit and/or implicit use can be disturbed; (2) dissociations between impaired explicit and preserved implicit use occur rather frequently; and (3) different types of sound localisation deficits can be associated with preserved implicit use. Conceptually, the dissociation between explicit and implicit use may reflect the dual-stream dichotomy of auditory processing. Our results speak in favour of systematic assessment of auditory spatial functions in clinical settings, especially when adaptation to the auditory environment is at stake. Further, systematic studies are needed to link deficits of explicit vs. implicit use to disability in everyday activities, to design appropriate rehabilitation strategies, and to ascertain how far the explicit and implicit use of spatial cues can be retrained following brain damage.

    The Resonant Dynamics of Speech Perception: Interword Integration and Duration-Dependent Backward Effects

    How do listeners integrate temporally distributed phonemic information into coherent representations of syllables and words? During fluent speech perception, variations in the durations of speech sounds and silent pauses can produce different perceived groupings. For example, increasing the silence interval between the words "gray chip" may result in the percept "great chip", whereas increasing the duration of fricative noise in "chip" may alter the percept to "great ship" (Repp et al., 1978). The ARTWORD neural model quantitatively simulates such context-sensitive speech data. In ARTWORD, sequential activation and storage of phonemic items in working memory provides bottom-up input to unitized representations, or list chunks, that group together sequences of items of variable length. The list chunks compete with each other as they dynamically integrate this bottom-up information. The winning groupings feed back to provide top-down support to their phonemic items. This feedback establishes a resonance which temporarily boosts the activation levels of selected items and chunks, thereby creating an emergent conscious percept. Because the resonance evolves more slowly than working memory activation, it can be influenced by information presented after relatively long intervening silence intervals. The same phonemic input can thereby yield different groupings depending on its arrival time. Processes of resonant transfer and competitive teaming help determine which groupings win the competition. Habituating levels of neurotransmitter along the pathways that sustain the resonant feedback lead to a resonant collapse that permits the formation of subsequent resonances.
    Air Force Office of Scientific Research (F49620-92-J-0225); Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657)
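
    A minimal toy sketch of the resonance idea described above (not the ARTWORD model itself) is given below: a working-memory item excites a list chunk bottom-up, the chunk feeds back top-down through a habituating transmitter gate, activity is transiently amplified (the resonance), and transmitter depletion eventually collapses it. Every equation and parameter here is an illustrative assumption.

```python
import numpy as np

# Toy resonance sketch, not ARTWORD; all equations and parameters are assumptions.
dt, steps = 0.001, 4000
item  = np.zeros(steps)    # working-memory item activity
chunk = np.zeros(steps)    # list-chunk activity
gate  = np.ones(steps)     # habituating transmitter on the top-down feedback pathway

for k in range(1, steps):
    inp = 1.0 if k * dt < 0.5 else 0.0                    # phonemic input during the first 500 ms
    topdown = gate[k - 1] * chunk[k - 1]                  # gated top-down support to the item
    item[k]  = item[k - 1]  + dt * (-item[k - 1]  + inp + 2.0 * topdown)
    chunk[k] = chunk[k - 1] + dt * (-chunk[k - 1] + 2.0 * item[k - 1])
    # The transmitter recovers towards 1 but is depleted by use, so the mutually
    # amplifying item-chunk loop (the resonance) eventually collapses on its own.
    gate[k]  = gate[k - 1] + dt * (0.5 * (1.0 - gate[k - 1]) - 3.0 * gate[k - 1] * chunk[k - 1])
```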

    Roaring lions and chirruping lemurs: How the brain encodes sound objects in space.

    The dual-stream model of auditory processing postulates separate processing streams for sound meaning and for sound location. The present review draws on evidence from human behavioral and activation studies as well as from lesion studies to argue for a position-linked representation of sound objects that is distinct both from the position-independent representation within the ventral/What stream and from the explicit sound localization processing within the dorsal/Where stream.

    Determination and evaluation of clinically efficient stopping criteria for the multiple auditory steady-state response technique

    Background: Although the auditory steady-state response (ASSR) technique uses objective statistical detection algorithms to estimate behavioural hearing thresholds, the audiologist still has to decide when to terminate ASSR recordings, which reintroduces a degree of subjectivity. Aims: The present study aimed to establish clinically efficient stopping criteria for a multiple 80-Hz ASSR system. Methods: In Experiment 1, data from 31 normal-hearing subjects were analyzed off-line to propose stopping rules. On this basis, ASSR recordings are stopped when (1) all 8 responses reach significance and remain significant for 8 consecutive sweeps; (2) the mean noise level is ≤ 4 nV (if, at this "≤ 4-nV" criterion, p-values lie between 0.05 and 0.1, the measurement is extended only once by 8 sweeps); or (3) a maximum of 48 sweeps is reached. In Experiment 2, these stopping criteria were applied to 10 normal-hearing and 10 hearing-impaired adults to assess their efficiency. Results: Applying these stopping rules yielded ASSR thresholds comparable to those of other multiple-ASSR research with normal-hearing and hearing-impaired adults. Furthermore, in 80% of the cases, ASSR thresholds could be obtained within a time-frame of 1 hour. Examining the significant response amplitudes of the hearing-impaired adults through cumulative curves indicated that a noise-stop criterion higher than "≤ 4 nV" can probably be used. Conclusions: The proposed stopping rules can be used in adults to determine accurate ASSR thresholds within an acceptable time-frame of about 1 hour. However, additional research with infants and with adults with varying degrees and configurations of hearing loss is needed to optimize these criteria.
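
    Expressed as simple decision logic, the proposed stopping rules could be checked after each recorded sweep roughly as in the sketch below. The data structures (per-response p-values and a mean noise estimate in nV) are assumptions, since the actual recordings and statistics come from the ASSR system itself.

```python
def should_stop(p_history, mean_noise_nv, extended, max_sweeps=48):
    """Apply the proposed stopping rules after each sweep (illustrative data structures).

    p_history     -- one entry per sweep, each an 8-element list of p-values for the
                     cumulative response-detection test of the 8 carriers.
    mean_noise_nv -- current mean residual noise level in nanovolts.
    extended      -- True once the single 8-sweep extension has already been granted.
    Returns (stop, extend_by_8_sweeps).
    """
    sweeps = len(p_history)

    # Rule 1: all 8 responses significant, maintained over 8 consecutive sweeps.
    if sweeps >= 8 and all(all(p < 0.05 for p in ps) for ps in p_history[-8:]):
        return True, False

    # Rule 2: mean noise level has reached 4 nV or less; extend once by 8 sweeps
    # if any p-value is borderline (between 0.05 and 0.1), otherwise stop.
    if sweeps and mean_noise_nv <= 4.0:
        borderline = any(0.05 <= p < 0.1 for p in p_history[-1])
        if borderline and not extended:
            return False, True
        return True, False

    # Rule 3: hard ceiling on recording length.
    return sweeps >= max_sweeps, False
```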