    The Spectrotemporal Filter Mechanism of Auditory Selective Attention

    Summary: Although we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, although the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli.
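The counter-phase gating idea in this summary can be sketched numerically: two tonotopically tuned ensembles whose excitability oscillates at the attended stream rate, one in phase with the attended tones and one in anti-phase. All names and parameter values below are illustrative assumptions, not taken from the study.

```python
import numpy as np

fs = 1000                      # samples per second (assumed)
t = np.arange(0, 4, 1 / fs)    # 4 s of simulated time
f_rhythm = 1.5                 # attended stream rate in Hz (assumed)

# Subthreshold excitability: high phase in the attended-frequency region,
# counter-phase (low) in differently tuned regions.
excitability_attended = np.cos(2 * np.pi * f_rhythm * t)
excitability_other = np.cos(2 * np.pi * f_rhythm * t + np.pi)

# Tone onsets aligned to the attended rhythm land on the excitable phase.
onsets = (np.arange(0, 4, 1 / f_rhythm) * fs).astype(int)
gain_attended = excitability_attended[onsets].mean()   # near +1: amplified
gain_other = excitability_other[onsets].mean()         # near -1: suppressed
```

Because attended tone onsets coincide with the excitable phase of the matching-frequency ensemble and the inhibited phase of the others, responses at attended time points are simultaneously amplified and spectrally sharpened, which is the filter behavior the summary describes.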

    Regularity Extraction from Non-Adjacent Sounds

    The regular behavior of sound sources helps us to make sense of the auditory environment. Regular patterns may, for instance, convey information on the identity of a sound source (such as the acoustic signature of a train moving on the rails). Yet typically, this signature overlaps in time with signals emitted from other sound sources. It is generally assumed that auditory regularity extraction cannot operate upon this mixture of signals because it only finds regularities between adjacent sounds. In this view, the auditory environment would be grouped into separate entities by means of readily available acoustic cues such as separation in frequency and location. Regularity extraction processes would then operate upon the resulting groups. Our new experimental evidence challenges this view. We presented two interleaved sound sequences which overlapped in frequency range and shared all acoustic parameters. The sequences only differed in their underlying regular patterns. We inserted deviants into one of the sequences to probe whether the regularity was extracted. In the first experiment, we found that these deviants elicited the mismatch negativity (MMN) component. Thus the auditory system was able to find the regularity between the non-adjacent sounds. Regularity extraction was not influenced by sequence cohesiveness as manipulated by the relative duration of tones and silent inter-tone-intervals. In the second experiment, we showed that a regularity connecting non-adjacent sounds was discovered only when the intervening sequence also contained a regular pattern, but not when the intervening sounds were randomly varying. This suggests that separate regular patterns are available to the auditory system as a cue for identifying signals coming from distinct sound sources. Thus auditory regularity extraction is not necessarily confined to a processing stage after initial sound grouping, but may precede grouping when other acoustic cues are unavailable.
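The interleaving logic described above can be illustrated with a short sketch: two repeating patterns are alternated tone by tone, so each pattern's regularity holds only between non-adjacent sounds, and occasional deviants probe whether that regularity was extracted. Tone frequencies, pattern lengths, and deviant positions are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two regular patterns that will be interleaved tone by tone (Hz, assumed).
pattern_a = [440, 494, 523]
pattern_b = [660, 740, 784]
n_cycles = 20

seq_a = np.tile(pattern_a, n_cycles).astype(float)
seq_b = np.tile(pattern_b, n_cycles).astype(float)

# Insert occasional deviants into sequence A to probe regularity extraction.
deviant_idx = rng.choice(len(seq_a), size=5, replace=False)
seq_a[deviant_idx] = 392.0   # deviant frequency (assumed)

# Interleave: A and B tones alternate, so each pattern's tones are
# non-adjacent in the combined stream the listener actually hears.
interleaved = np.empty(len(seq_a) + len(seq_b))
interleaved[0::2] = seq_a
interleaved[1::2] = seq_b
```

Detecting the deviants then requires relating every other tone, exactly the non-adjacent regularity extraction the MMN findings demonstrate.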

    Neural Substrate of Concurrent Sound Perception: Direct Electrophysiological Recordings from Human Auditory Cortex

    In everyday life, consciously or not, we are constantly disentangling the multiple auditory sources contributing to our acoustical environment. To better understand the neural mechanisms involved in concurrent sound processing, we manipulated sound onset asynchrony to induce the segregation or grouping of two concurrent sounds. Each sound consisted of amplitude-modulated tones at different carrier and modulation frequencies, allowing a cortical tagging of each sound. Electrophysiological recordings were carried out in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they performed an auditory distracting task. We found that transient and steady-state evoked responses, and induced gamma oscillatory activities, were enhanced in the case of onset synchrony. These effects were mainly located in Heschl's gyrus for steady-state responses, whereas they were found in the lateral superior temporal gyrus for evoked transient responses and induced gamma oscillations. They can be related to distinct neural mechanisms such as frequency selectivity and habituation. These results in the auditory cortex provide an anatomically refined description of the neurophysiological components which might be involved in the perception of concurrent sounds.
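The frequency-tagging logic of these stimuli (amplitude-modulated tones with distinct carrier and modulation frequencies, with onset asynchrony controlling grouping vs. segregation) can be sketched as follows; all frequencies and the 150 ms delay are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

fs = 16000          # sampling rate (assumed)
dur = 1.0           # duration in seconds
t = np.arange(0, dur, 1 / fs)

def am_tone(t, carrier_hz, mod_hz):
    """Sinusoidal carrier, fully amplitude-modulated at mod_hz."""
    return (1 + np.sin(2 * np.pi * mod_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

# Distinct carrier AND modulation frequencies let each sound's steady-state
# response be tracked ("tagged") separately in the neural recordings.
sound_1 = am_tone(t, carrier_hz=500, mod_hz=21)
sound_2 = am_tone(t, carrier_hz=1100, mod_hz=29)

# Onset asynchrony: delaying sound_2 by 150 ms promotes segregation;
# synchronous onsets (delay = 0) promote grouping instead.
delay = int(0.150 * fs)
mix = sound_1.copy()
mix[delay:] += sound_2[: len(mix) - delay]
```

The same generator with `delay = 0` yields the synchronous-onset condition, so the two conditions differ only in the grouping cue, not in spectral content.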

    Perceptual organization of auditory streaming-task relies on neural entrainment of the stimulus-presentation rate: MEG evidence

    Background: Humans are able to extract regularities from complex auditory scenes in order to form perceptually meaningful elements. It has been shown previously that this process depends critically on both the temporal integration of the sensory input over time and the degree of frequency separation between concurrent sound sources. Our goal was to examine the relationship between these two aspects by means of magnetoencephalography (MEG). To achieve this aim, we combined time-frequency analysis on a sensor-space level with source analysis. Our paradigm consisted of asymmetric ABA-tone triplets wherein the B-tones were presented temporally closer to the first A-tones, providing different tempi within the same sequence. Participants attended to the slowest B-rhythm whilst the frequency separation between tones was manipulated (0, 2, 4 and 10 semitones). Results: The asymmetric ABA-triplets spontaneously elicited periodic sustained responses corresponding to the temporal distribution of the A-B and B-A tone intervals in all conditions. Moreover, when attending to the B-tones, the neural representations of the A- and B-streams were both detectable in the scenarios which allow perceptual streaming (2, 4 and 10 semitones). Alongside this, the steady-state responses tuned to the presentation of the B-tones increased significantly with the frequency separation between tones. The B-tone-related steady-state responses dominated the A-tone responses in the 10-semitone condition, whereas the A-tone representation dominated in the 2- and 4-semitone conditions, in which greater effort was required to complete the task. Additionally, the P1 evoked-field component following the B-tones increased in magnitude with increasing inter-tonal frequency difference.
Conclusions: The enhancement of the evoked fields in source space, together with the B-tone-related activity in the time-frequency results, likely reflects the selective enhancement of the attended B-stream. The results also suggest a differing efficiency of temporal integration of separate streams depending on the degree of frequency separation between the sounds. Overall, the present findings suggest that the neural effects of auditory streaming can be captured directly in the time-frequency spectrum at the sensor-space level.
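A minimal sketch of the asymmetric ABA-triplet design: B tones are shifted toward the preceding A tone, yielding two tempi within one sequence, and the A-B separation is set in semitones (0, 2, 4, 10 as in the study). The timing values and base frequency are assumptions for illustration.

```python
import numpy as np

def aba_onsets(n_triplets, triplet_dur=0.4, b_offset=0.06):
    """Onset times (s) for A1, B, A2 per triplet; B sits closer to A1."""
    onsets = []
    for k in range(n_triplets):
        t0 = k * triplet_dur
        onsets.append(("A", t0))
        onsets.append(("B", t0 + b_offset))          # shifted toward first A
        onsets.append(("A", t0 + triplet_dur / 2))   # second A at midpoint
    return onsets

def b_freq(a_freq, semitones):
    """B-tone frequency a given number of semitones above the A-tone."""
    return a_freq * 2 ** (semitones / 12)

onsets = aba_onsets(3)
# A-B separations used in the study, applied to an assumed 500 Hz A-tone.
freqs = {st: round(b_freq(500.0, st), 1) for st in (0, 2, 4, 10)}
```

With `b_offset` smaller than half the A-A interval, the B-stream carries its own, slower rhythm, which is what lets the steady-state analysis separate the A- and B-related responses.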

    Sounds in noise: Behavioral and neural studies of illusory continuity and discontinuity

    The ability to parse an auditory scene into meaningful components varies greatly between individuals; some are able to parse out and write down competing musical pieces while others struggle to understand each word whenever they have to converse in a noisy environment. Using a simple discrimination task, healthy, normally-hearing adult participants were asked to judge whether a pure tone (with or without amplitude modulation) was continuous or contained a gap. One quarter of the participants consistently heard a gap when none was present, if the tone was accompanied by a higher-frequency noise burst with a lower edge beginning one octave away from the tone (that did not have any energy overlapping the tone). This novel form of informational masking (perceptual interference between components with non-overlapping sound energy) was named 'illusory auditory discontinuity'. The phenomenon appears to reflect natural differences in auditory processing rather than differences in decision-making strategies because: (1) susceptibility to illusory discontinuity correlates with individual differences in auditory streaming (measured using a classical ABA sequential paradigm); and (2) electroencephalographic responses elicited by tones overlaid by short noise bursts (when these sounds are not the focus of attention) are significantly correlated with the occurrence of illusory auditory discontinuity in both an early event-related potential (ERP) component (40-66 ms) and a later ERP component (270-350 ms) after noise onset. Participants prone to illusory discontinuity also tended not to perceive the 'auditory continuity illusion' (in which a tone is heard continuing under a burst of noise centered on the tone frequency that completely masks it) at short noise durations, but reliably perceived the auditory continuity illusion at longer noise durations.
These results suggest that a number of attributes describing how individuals differentially parse complex auditory scenes are related to individual differences in two potentially independent attributes of neural processing, reflected here by EEG waveform differences at ~50 ms and ~300 ms after noise onset. Neural correlates of the auditory continuity illusion were also investigated by adjusting masker loudness, so that when listeners were given physically identical stimuli, they correctly detected the gap in a target tone on some trials, while on other trials they reported the tone as continuous (experiencing illusory continuity). Higher power of low-frequency EEG activity (in the delta-theta range, <6 Hz) was observed prior to the onset of tones that were subsequently judged as discontinuous, with no other consistent EEG differences found after tone onset. These data suggest that the occurrence of the continuity illusion may depend on the brain state that exists immediately before a trial begins.
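The discrimination stimulus described above can be sketched as follows: a pure tone that may contain a short silent gap, plus a noise burst whose lower spectral edge lies one octave above the tone, so the two share no spectral energy. Sampling rate, durations, and levels are illustrative assumptions, not the study's parameters.

```python
import numpy as np

fs = 16000          # sampling rate (assumed)
dur = 0.5           # trial duration in seconds
tone_hz = 1000.0    # target tone frequency (assumed)
t = np.arange(0, dur, 1 / fs)

def make_trial(with_gap, rng):
    tone = np.sin(2 * np.pi * tone_hz * t)
    if with_gap:
        g0, g1 = int(0.22 * fs), int(0.28 * fs)   # 60 ms silent gap
        tone[g0:g1] = 0.0
    # Band-limited noise starting one octave above the tone (here 2-4 kHz),
    # built by zeroing out-of-band FFT bins, so it never overlaps the tone.
    noise = rng.standard_normal(len(t))
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    spectrum[(freqs < 2 * tone_hz) | (freqs > 4000)] = 0.0
    noise = np.fft.irfft(spectrum, n=len(t))
    return tone + 0.5 * noise

rng = np.random.default_rng(1)
trial_gap = make_trial(True, rng)    # physically contains a gap
trial_cont = make_trial(False, rng)  # physically continuous
```

The point of the design is that the with-gap and continuous trials differ only inside the 60 ms window, yet listeners prone to illusory discontinuity report a gap even for `trial_cont`.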

    Computational models of auditory perception from feature extraction to stream segregation and behavior

    This is the final version, available on open access from Elsevier via the DOI in this record. Data availability: this is a review study, and as such did not generate any new data. Audition is by nature dynamic, from brainstem processing on sub-millisecond time scales, to segregating and tracking sound sources with changing features, to the pleasure of listening to music and the satisfaction of getting the beat. We review recent advances from computational models of sound localization, of auditory stream segregation, and of beat perception/generation. A wealth of behavioral, electrophysiological and imaging studies shed light on these processes, typically with synthesized sounds having regular temporal structure. Computational models integrate knowledge from different experimental fields and at different levels of description. We advocate a neuromechanistic modeling approach that incorporates knowledge of the auditory system from various fields, that utilizes plausible neural mechanisms, and that bridges our understanding across disciplines. Funding: Engineering and Physical Sciences Research Council (EPSRC).