30 research outputs found

    The Dialog of Primary and Non-primary Auditory Cortex at the 'Cocktail Party'

    Get PDF
    In this issue of Neuron, O'Sullivan et al. (2019) measured electro-cortical responses to "cocktail party" speech mixtures in neurosurgical patients and demonstrated that the selective enhancement of attended speech is achieved through the adaptive weighting of primary auditory cortex output by non-primary auditory cortex.

    Effects of cross-modal asynchrony on informational masking in human cortex

    Get PDF
    In many everyday listening situations, an otherwise audible sound may go unnoticed amid multiple other sounds. This auditory phenomenon, called informational masking (IM), is sensitive to visual input and involves early (50-250 msec) activity in the auditory cortex (the so-called awareness-related negativity). It is still unclear whether and how the timing of visual input influences the neural correlates of IM in auditory cortex. To address this question, we obtained simultaneous behavioral and neural measures of IM from human listeners in the presence of a visual input stream and varied the asynchrony between the visual stream and the rhythmic auditory target stream (in-phase, antiphase, or random). Results show effects of cross-modal asynchrony on both target detectability (RT and sensitivity) and the awareness-related negativity measured with EEG, which were driven primarily by antiphasic audiovisual stimuli. The neural effect was limited to the interval shortly before listeners' behavioral report of the target. Our results indicate that the relative timing of visual input can influence the IM of a target sound in the human auditory cortex. They further show that this audiovisual influence occurs early during the perceptual buildup of the target sound. In summary, these findings provide novel insights into the interplay of IM and multisensory processing in the human brain.
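
    To make the three asynchrony conditions concrete, the sketch below generates onset times for a rhythmic auditory target stream and a visual stream that is in phase, in antiphase, or randomly timed relative to it. The 1 Hz rate and 10 s duration are illustrative placeholders, not the stimulus parameters of the study.

```python
import numpy as np

def make_onsets(rate_hz=1.0, duration_s=10.0, condition="in-phase", seed=0):
    """Return (auditory, visual) onset times for one trial.

    rate_hz and duration_s are illustrative placeholders; the actual
    stimulus parameters of the study may differ.
    """
    rng = np.random.default_rng(seed)
    period = 1.0 / rate_hz
    auditory = np.arange(0.0, duration_s, period)    # rhythmic target tones
    if condition == "in-phase":
        visual = auditory.copy()                     # flashes synchronous with tones
    elif condition == "antiphase":
        visual = auditory + period / 2.0             # flashes halfway between tones
    elif condition == "random":
        visual = np.sort(rng.uniform(0.0, duration_s, auditory.size))
    else:
        raise ValueError(f"unknown condition: {condition}")
    return auditory, visual

aud, vis = make_onsets(condition="antiphase")
print(np.round(vis - aud, 2))  # constant +0.5 s offset in the antiphase condition
```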

    Listening to speech in noisy scenes: Antithetical contribution of primary and non-primary auditory cortex

    No full text
    Invasive and non-invasive electrophysiological measurements during “cocktail-party”-like listening indicate that neural activity in human auditory cortex (AC) “tracks” the envelope of relevant speech. Due to the measurements’ limited coverage and/or spatial resolution, however, the distinct contribution of primary and non-primary auditory areas remains unclear. Using 7-Tesla fMRI, we measured brain responses of participants attending to one speaker, without and with another concurrent speaker. Using voxel-wise modeling, we observed significant speech envelope tracking in bilateral Heschl’s gyrus (HG) and middle superior temporal sulcus (mSTS), despite the sluggish fMRI responses and slow temporal sampling. Neural activity was either positively (HG) or negatively (mSTS) correlated with the speech envelope. Spatial pattern analyses indicated that whereas tracking in HG reflected both relevant and (to a lesser extent) non-relevant speech, right mSTS selectively represented the relevant speech signal. These results indicate that primary and non-primary AC process ongoing speech antithetically, suggesting a push-pull of acoustic and linguistic information.
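
    A minimal sketch of how envelope tracking can be quantified despite slow fMRI sampling: extract the broadband speech envelope, convolve it with a hemodynamic response function, resample it at the scanner's TR, and correlate the result with each voxel's time course (positive r for HG-like tracking, negative r for mSTS-like tracking). The double-gamma HRF and all parameters below are generic assumptions, not the voxel-wise encoding model used in the study.

```python
import numpy as np
from scipy.signal import hilbert, resample
from scipy.stats import gamma

def canonical_hrf(dt, duration=30.0):
    """Generic double-gamma HRF (illustrative, not the study's model)."""
    t = np.arange(0.0, duration, dt)
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

def envelope_regressor(audio, fs, tr, n_vols, env_fs=100):
    """Speech envelope -> predicted BOLD response sampled at the TR."""
    env = np.abs(hilbert(audio))                      # broadband amplitude envelope
    env = resample(env, int(len(env) / fs * env_fs))  # downsample to env_fs
    bold = np.convolve(env, canonical_hrf(1.0 / env_fs))[: env.size]
    idx = (np.arange(n_vols) * tr * env_fs).astype(int)
    return bold[idx]

def tracking_map(audio, fs, tr, voxels):
    """Pearson r between envelope regressor and each voxel (voxels: n_vols x n_vox)."""
    x = envelope_regressor(audio, fs, tr, voxels.shape[0])
    x = (x - x.mean()) / x.std()
    v = (voxels - voxels.mean(0)) / voxels.std(0)
    return (x @ v) / len(x)

# toy demo with synthetic data: 200 volumes at TR = 2 s, 50 voxels
fs, tr, n_vols = 16000, 2.0, 200
rng = np.random.default_rng(0)
audio = rng.standard_normal(int(n_vols * tr * fs))
voxels = rng.standard_normal((n_vols, 50))
print(tracking_map(audio, fs, tr, voxels).shape)      # -> (50,)
```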

    Cortical Processing of Distracting Speech in Noisy Auditory Scenes Depends on Perceptual Demand

    No full text
    Selective attention is essential for the processing of multi-speaker auditory scenes because they require the perceptual segregation of the relevant speech ("target") from irrelevant speech ("distractors"). For simple sounds, it has been suggested that the processing of multiple distractor sounds depends on bottom-up factors affecting task performance. However, it remains unclear whether such dependency applies to naturalistic multi-speaker auditory scenes. In this study, we tested the hypothesis that increased perceptual demand (the processing requirement posed by the scene to separate the target speech) reduces the cortical processing of distractor speech, thus decreasing its perceptual segregation. Human participants were presented with auditory scenes including three speakers and asked to selectively attend to one speaker while their EEG was acquired. The perceptual demand of this selective listening task was varied by introducing an auditory cue (interaural time differences, ITDs) for segregating the target from the distractor speakers, while acoustic differences between the distractors were matched in ITD and loudness. We obtained a quantitative measure of the cortical segregation of the distractor speakers by assessing the difference in how accurately speech-envelope-following EEG responses could be predicted by models of averaged distractor speech versus models of individual distractor speech. In agreement with our hypothesis, results show that interaural segregation cues led to improved behavioral word-recognition performance and stronger cortical segregation of the distractor speakers. The neural effect was strongest in the δ-band and at early delays (0-200 ms). Our results indicate that during low perceptual demand, the human cortex represents individual distractor speech signals as more segregated. This suggests that, in addition to purely acoustic properties, the cortical processing of distractor speakers depends on factors such as perceptual demand.
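
    The model comparison behind the segregation measure can be pictured with a forward (temporal response function) model: predict the EEG from time-lagged speech envelopes, once using the averaged distractor envelope and once using each distractor envelope as a separate predictor, and compare out-of-sample prediction accuracy. The ridge-regression sketch below runs on synthetic data with assumed parameters (100 Hz sampling, lags 0-250 ms, a single train/test split); it mirrors the logic, not the authors' exact pipeline.

```python
import numpy as np

def lagged(x, n_lags):
    """Design matrix of x at lags 0..n_lags-1 (n_samples x n_lags)."""
    X = np.zeros((len(x), n_lags))
    for k in range(n_lags):
        X[k:, k] = x[: len(x) - k]
    return X

def ridge_fit_predict(X_tr, y_tr, X_te, lam=1e3):
    """Fit ridge weights on training data, predict the test EEG."""
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]), X_tr.T @ y_tr)
    return X_te @ w

def prediction_r(env_list, eeg, n_lags=26):
    """Out-of-sample correlation between predicted and measured EEG.

    env_list: envelope arrays used as predictors (the averaged model
    passes one array, the individual model one array per distractor).
    """
    X = np.hstack([lagged(e, n_lags) for e in env_list])
    half = len(eeg) // 2                    # single split, for illustration only
    pred = ridge_fit_predict(X[:half], eeg[:half], X[half:])
    return np.corrcoef(pred, eeg[half:])[0, 1]

# toy comparison on synthetic data (100 Hz sampling, lags 0-250 ms)
rng = np.random.default_rng(0)
d1, d2 = rng.standard_normal(6000), rng.standard_normal(6000)
eeg = 0.5 * d1 + 0.2 * d2 + rng.standard_normal(6000)   # fake EEG channel
r_avg = prediction_r([(d1 + d2) / 2], eeg)
r_ind = prediction_r([d1, d2], eeg)
print(f"averaged model r = {r_avg:.3f}, individual model r = {r_ind:.3f}")
```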

    Effects of cross-modal asynchrony on informational masking in human cortex

    No full text
    Sounds from "Hausfeld, L., Gutschalk, A., Formisano, E., Riecke, L. (2017). Effects of cross-modal asynchrony on informational masking in human cortex. Journal of Cognitive Neuroscience". Informational masking paradigm containing a pulsating tone (target) embedded in a multi-tone cloud (masker)

    Activity in human auditory cortex represents spatial separation between concurrent sounds

    Get PDF
    The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene. SIGNIFICANCE STATEMENT Often, when we think of auditory spatial information, we think of where sounds are coming from, that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a sound wave into objects. Essentially, when sounds are farther apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent.
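
    The decoding step can be summarized with the generic MVPA recipe: train a linear support vector machine on multi-voxel activity patterns and test with cross-validation whether condition labels can be predicted above chance. The scikit-learn sketch below uses synthetic patterns and hypothetical labels (small vs. large spatial separation); it illustrates the recipe, not the exact analysis of the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# synthetic stand-in for ROI patterns: 40 trials x 200 voxels,
# two conditions (hypothetical small vs. large spatial separation)
labels = np.repeat([0, 1], 20)
patterns = rng.standard_normal((40, 200))
patterns[labels == 1, :10] += 0.8        # weak condition signal in a few voxels

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, dual=False))
scores = cross_val_score(clf, patterns, labels, cv=5)   # 5-fold cross-validation
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```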