83 research outputs found

    Multisensory Perceptual Learning of Temporal Order: Audiovisual Learning Transfers to Vision but Not Audition

    Background: An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or by multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Methodology/Principal Findings: Three groups were trained daily for 10 sessions on an auditory, a visual, or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and outside their group's modality prior to learning, so that transfer of any learning from the trained task could be measured by post-testing the other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, nor did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes.
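    The temporal order discrimination threshold reported above is typically read off a psychometric function fit to 2-AFC responses across stimulus-onset asynchronies (SOAs). The sketch below illustrates that general logic on simulated data; the stimulus labels, SOA values, and observer parameters are invented for illustration and are not taken from the study.

    ```python
    # Illustrative sketch: estimating a TOJ threshold (JND) by fitting a
    # cumulative-Gaussian psychometric function to simulated 2-AFC data.
    # All parameter values here are hypothetical, not the study's.
    import math
    import random

    random.seed(0)

    def p_visual_first(soa_ms, pss=0.0, sigma=30.0):
        """Cumulative Gaussian: probability of reporting 'visual first'
        at a given SOA (pss = point of subjective simultaneity)."""
        return 0.5 * (1.0 + math.erf((soa_ms - pss) / (sigma * math.sqrt(2))))

    def simulate_session(soas, trials_per_soa=40, sigma=30.0):
        """Simulate an observer; return proportion 'visual first' per SOA."""
        props = {}
        for soa in soas:
            hits = sum(random.random() < p_visual_first(soa, sigma=sigma)
                       for _ in range(trials_per_soa))
            props[soa] = hits / trials_per_soa
        return props

    def fit_sigma(props, pss=0.0):
        """Grid-search least-squares fit of sigma; for a cumulative
        Gaussian centred on the PSS, the 84%-correct JND equals sigma."""
        best_sigma, best_err = None, float("inf")
        for sigma in [s / 2 for s in range(2, 201)]:  # 1.0 .. 100.0 ms
            err = sum((p - p_visual_first(soa, pss, sigma)) ** 2
                      for soa, p in props.items())
            if err < best_err:
                best_sigma, best_err = sigma, err
        return best_sigma

    soas = [-90, -60, -30, -15, 0, 15, 30, 60, 90]
    props = simulate_session(soas, sigma=30.0)
    jnd = fit_sigma(props)
    print(f"estimated JND: {jnd:.1f} ms")  # should roughly recover the simulated 30 ms
    ```

    Perceptual learning in this framework shows up as a shrinking fitted sigma across sessions.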

    Vocal Accuracy and Neural Plasticity Following Micromelody-Discrimination Training

    Recent behavioral studies report correlational evidence to suggest that non-musicians with good pitch discrimination sing more accurately than those with poorer auditory skills. However, other studies have reported a dissociation between perceptual and vocal production skills. In order to elucidate the relationship between auditory discrimination skills and vocal accuracy, we administered an auditory-discrimination training paradigm to a group of non-musicians to determine whether training-enhanced auditory discrimination would specifically result in improved vocal accuracy. We utilized micromelodies (i.e., melodies with seven different interval scales, each smaller than a semitone) as the main stimuli for auditory discrimination training and testing, and we used single-note and melodic singing tasks to assess vocal accuracy in two groups of non-musicians (experimental and control). To determine if any training-induced improvements in vocal accuracy would be accompanied by related modulations in cortical activity during singing, the experimental group of non-musicians also performed the singing tasks while undergoing functional magnetic resonance imaging (fMRI). Following training, the experimental group exhibited significant enhancements in micromelody discrimination compared to controls. However, we did not observe a correlated improvement in vocal accuracy during single-note or melodic singing, nor did we detect any training-induced changes in activity within brain regions associated with singing. Given the observations from our auditory training regimen, we therefore conclude that perceptual discrimination training alone is not sufficient to improve vocal accuracy in non-musicians, supporting the suggested dissociation between auditory perception and vocal production.
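    A micromelody's defining property is that adjacent notes differ by less than a semitone (100 cents). The sketch below shows the standard cents-to-frequency mapping, f = f0 · 2^(cents/1200), applied to a hypothetical contour; the contour, base frequency, and 25-cent step size are invented examples, not the study's stimuli.

    ```python
    # Illustrative sketch: building a micromelody from a contour of scale
    # steps, where each step is a fixed sub-semitone interval in cents.
    # 100 cents = 1 semitone; micromelody steps are smaller than that.

    def micromelody(f0_hz, contour_steps, step_cents):
        """Map a melodic contour (integer scale steps) onto frequencies
        using the cents formula f = f0 * 2**(cents / 1200)."""
        return [f0_hz * 2 ** (steps * step_cents / 1200.0)
                for steps in contour_steps]

    contour = [0, 2, 1, 3, 2, 0]                        # hypothetical up-down contour
    freqs = micromelody(440.0, contour, step_cents=25)  # quarter-semitone steps
    print([round(f, 2) for f in freqs])
    ```

    Varying `step_cents` across trials (while keeping the contour fixed) yields the graded interval scales used for discrimination training.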

    Rhythmic masking release: contribution of cues for perceptual organization to the cross-spectral fusion of concurrent narrow-band noises.

    The contribution of temporal asynchrony, spatial separation, and frequency separation to the cross-spectral fusion of temporally contiguous brief narrow-band noise bursts was studied using the Rhythmic Masking Release (RMR) paradigm. RMR involves the discrimination of one of two possible rhythms, despite perceptual masking of the rhythm by an irregular sequence of sounds identical to the rhythmic bursts, interleaved among them. The release of the rhythm from masking can be induced by causing the fusion of the irregular interfering sounds with concurrent "flanking" sounds situated in different frequency regions. The accuracy and the rated clarity of the identified rhythm in a 2-AFC procedure were employed to estimate the degree of fusion of the interfering sounds with flanking sounds. The results suggest that while synchrony fully fuses short-duration noise bursts across frequency and across space (i.e., across ears and loudspeakers), an asynchrony of 20–40 ms produces no fusion. Intermediate asynchronies of 10–20 ms produce partial fusion, where the presence of other cues is critical for unambiguous grouping. Though frequency and spatial separation reduced fusion, neither of these manipulations was sufficient to abolish it. For the parameters varied in this study, stimulus onset asynchrony was the dominant cue determining fusion, but there were additive effects of the other cues. Temporal synchrony appears to be critical in determining whether brief sounds with abrupt onsets and offsets are heard as one event or more than one. © 2002 Acoustical Society of America
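    The timing logic of an RMR trial can be pictured as three streams of burst onsets: a fixed target rhythm, irregular masker bursts, and flanker bursts offset from each masker by the asynchrony under test. The sketch below generates such a schedule; all onset times, counts, and the 20 ms asynchrony are invented for illustration and are not the study's parameters.

    ```python
    # Illustrative sketch of RMR trial structure: target rhythm + irregular
    # maskers + flankers offset from the maskers by a variable asynchrony.
    # A 0 ms asynchrony fuses maskers with flankers (releasing the rhythm);
    # the abstract reports fusion breaking down by 20-40 ms.
    import random

    random.seed(1)

    def rmr_schedule(rhythm_onsets_ms, n_maskers, trial_ms, flanker_asynchrony_ms):
        """Return onset times (ms) for the three streams in one trial."""
        maskers = sorted(random.uniform(0, trial_ms) for _ in range(n_maskers))
        flankers = [m + flanker_asynchrony_ms for m in maskers]
        return {"rhythm": list(rhythm_onsets_ms),
                "maskers": maskers,
                "flankers": flankers}

    sched = rmr_schedule([0, 250, 500, 1000, 1250], n_maskers=5,
                         trial_ms=1500, flanker_asynchrony_ms=20)
    ```

    Sweeping `flanker_asynchrony_ms` over a trial set, and scoring 2-AFC rhythm identification at each value, traces out the fusion-versus-asynchrony function the abstract describes.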