37 research outputs found

    Representations of specific acoustic patterns in the auditory cortex and hippocampus

    Previous behavioural studies have shown that repeated presentation of a randomly chosen acoustic pattern leads to the unsupervised learning of some of its specific acoustic features. The objective of our study was to determine the neural substrate for the representation of freshly learnt acoustic patterns. Subjects first performed a behavioural task that resulted in the incidental learning of three different noise-like acoustic patterns. During subsequent high-resolution functional magnetic resonance imaging scanning, subjects were exposed again to these three learnt patterns and to others that had not been learnt. Multi-voxel pattern analysis was used to test whether the learnt acoustic patterns could be 'decoded' from the patterns of activity in the auditory cortex and medial temporal lobe. We found that activity in the planum temporale and the hippocampus reliably distinguished between the learnt acoustic patterns. Our results demonstrate that these structures are involved in the neural representation of specific acoustic patterns after they have been learnt.
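    As a rough illustration of this kind of decoding analysis, the sketch below implements a leave-one-run-out nearest-centroid classifier over simulated multi-voxel patterns. The function name, data layout, and simulated data are illustrative assumptions; the study itself applied multi-voxel pattern analysis to real fMRI responses.

    ```python
    import numpy as np

    def decode_patterns(X, y, runs):
        """Leave-one-run-out nearest-centroid decoding of multi-voxel patterns.

        X    : (n_trials, n_voxels) activity patterns
        y    : (n_trials,) condition labels
        runs : (n_trials,) scanning-run index per trial
        Returns cross-validated classification accuracy.
        """
        correct = 0
        for test_run in np.unique(runs):
            train = runs != test_run
            test = runs == test_run
            labels = np.unique(y[train])
            # One mean pattern (centroid) per condition, from training runs only.
            centroids = np.stack([X[train & (y == c)].mean(axis=0) for c in labels])
            for x, true_label in zip(X[test], y[test]):
                d = np.linalg.norm(centroids - x, axis=1)
                correct += labels[np.argmin(d)] == true_label
        return correct / len(y)
    ```

    Above-chance accuracy (here, chance is 1 divided by the number of conditions) is the usual evidence that a region's activity distinguishes the learnt patterns.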

    An analysis of the masking of speech by competing speech using self-report data (L)

    Many of the items in the "Speech, Spatial, and Qualities of Hearing" scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85–99 (2004)] concern speech understanding in a variety of backgrounds, both speech and nonspeech. To test whether these self-report data reflect informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively higher scores for competing speech (six items), energetic masking (one item), and no masking (three items). The results suggest significant masking by competing speech in everyday listening situations.

    Generalization of auditory expertise in audio engineers and instrumental musicians

    From auditory perception to general cognition, the ability to play a musical instrument has been associated with skills both related and unrelated to music. However, it is unclear whether these effects are bound to the specific characteristics of musical instrument training, as little attention has been paid to other populations such as audio engineers and designers, whose auditory expertise may match or surpass that of musicians in specific auditory tasks or more naturalistic acoustic scenarios. We explored this possibility by comparing students of audio engineering (n = 20) to matched conservatory-trained instrumentalists (n = 24) and to naive controls (n = 20) on measures of auditory discrimination, auditory scene analysis, and speech-in-noise perception. We found that audio engineers and performing musicians had generally lower psychophysical thresholds than controls, with pitch perception showing the largest effect size. Compared to controls, audio engineers could better memorise and recall auditory scenes composed of non-musical sounds, whereas instrumental musicians performed best in a sustained selective attention task with two competing streams of tones. Finally, in a diotic speech-in-babble task, musicians showed lower signal-to-noise-ratio thresholds than both controls and engineers; however, a follow-up online study did not replicate this musician advantage. We also observed differences in personality that might account for group-based self-selection biases. Overall, we showed that investigating a wider range of forms of auditory expertise can help corroborate (or challenge) the specificity of the advantages previously associated with musical instrument training.
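    Signal-to-noise-ratio thresholds of the kind reported here are commonly estimated with adaptive staircases. A minimal sketch of a standard 2-down/1-up procedure follows; the abstract does not specify the exact method, so the function name, step size, and reversal count are illustrative assumptions.

    ```python
    def two_down_one_up(respond, start_snr=0.0, step=2.0, n_reversals=8):
        """2-down/1-up adaptive staircase, converging on ~70.7% correct.

        `respond(snr)` returns True on a correct trial (a stand-in for the
        listener). The threshold estimate is the mean SNR at the final
        six reversals of track direction.
        """
        snr, streak, direction = start_snr, 0, 0
        reversals = []
        while len(reversals) < n_reversals:
            if respond(snr):
                streak += 1
                if streak == 2:            # two correct in a row -> make it harder
                    streak = 0
                    if direction == +1:    # track was going up: record a reversal
                        reversals.append(snr)
                    direction = -1
                    snr -= step
            else:                          # any error -> make it easier
                streak = 0
                if direction == -1:
                    reversals.append(snr)
                direction = +1
                snr += step
        return sum(reversals[-6:]) / len(reversals[-6:])
    ```

    With a deterministic responder that is correct whenever the SNR is at or above some true threshold, the track oscillates one step either side of that value and the reversal average recovers it.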

    Informational masking of speech for elderly listeners

    Elderly listeners generally have more difficulty than young listeners understanding one talker in the presence of another. Some of this difficulty may be attributable to "informational masking": the extra listening difficulty caused by competing speech in comparison to acoustically equivalent noise maskers. A range of methods is used to measure informational masking for young and elderly listeners.

    Repetition detection and rapid auditory learning for stochastic tone clouds

    Stochastic sounds are useful to probe auditory memory, as they require listeners to learn unpredictable and novel patterns under controlled experimental conditions. Previous studies using white noise or random click trains have demonstrated rapid auditory learning for instances of such a class of sounds. Here, we tested stochastic sounds that enabled parametric control of spectrotemporal complexity: tone clouds. Tone clouds were defined as broadband combinations of tone pips at randomized frequencies and onset times. Varying the density of tones covered a perceptual range from random melodies to noise. Results showed that listeners could detect repeating patterns in tone clouds at all tested densities, with sparse tone clouds being the easiest. A model estimating amplitude modulation within cochlear filters showed that repetition detection was correlated with the amount of amplitude modulation at lower rates. Rapid learning of individual tone clouds was also observed, again for all densities. Tone clouds thus provide a tool to probe auditory learning in a variety of task-difficulty settings, which could be useful for clinical or neurophysiological studies. They also show that rapid auditory learning operates over the full range of spectrotemporal complexity typical of natural sounds, essentially from melodies to noise.
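    A minimal sketch of synthesising such a tone cloud — tone pips at randomized frequencies and onset times, with density set by the number of pips — assuming a log-uniform frequency draw and raised-cosine ramps (these parameter choices are illustrative, not taken from the study):

    ```python
    import numpy as np

    def tone_pip(freq, dur, fs, ramp=0.005):
        """A single tone pip with raised-cosine onset/offset ramps."""
        t = np.arange(int(dur * fs)) / fs
        pip = np.sin(2 * np.pi * freq * t)
        n_ramp = int(ramp * fs)
        env = np.ones_like(pip)
        ramp_curve = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
        env[:n_ramp] = ramp_curve          # fade in
        env[-n_ramp:] = ramp_curve[::-1]   # fade out
        return pip * env

    def tone_cloud(n_pips, total_dur=1.0, fs=16000, fmin=200.0, fmax=8000.0,
                   pip_dur=0.05, rng=None):
        """Broadband sum of tone pips at random frequencies and onsets.

        Frequencies are drawn log-uniformly in [fmin, fmax]; `n_pips`
        controls the density, from sparse (melody-like) to dense (noise-like).
        """
        rng = np.random.default_rng(rng)
        cloud = np.zeros(int(total_dur * fs))
        for _ in range(n_pips):
            f = np.exp(rng.uniform(np.log(fmin), np.log(fmax)))
            onset = rng.uniform(0, total_dur - pip_dur)
            i = int(onset * fs)
            pip = tone_pip(f, pip_dur, fs)
            cloud[i:i + len(pip)] += pip
        return cloud / np.max(np.abs(cloud))   # normalise peak amplitude
    ```

    A repetition-detection stimulus would then concatenate the same cloud twice, versus two independent draws for the non-repeating control.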

    Voice selectivity in the temporal voice area despite matched low-level acoustic cues.

    In human listeners, the temporal voice areas (TVAs) are regions of the superior temporal gyrus and sulcus that respond more to vocal sounds than to a range of nonvocal control sounds, including scrambled voices, environmental noises, and animal cries. One interpretation of the TVAs' selectivity is based on low-level acoustic cues: compared to control sounds, vocal sounds may have stronger harmonic content or greater spectrotemporal complexity. Here, we show that the right TVA remains selective to the human voice even when accounting for a variety of acoustical cues. Using fMRI, single vowel stimuli were contrasted with single notes of musical instruments with balanced harmonic-to-noise ratios and pitches. We also used "auditory chimeras", which preserved subsets of acoustical features of the vocal sounds. The right TVA was preferentially activated only for the natural human voice. In particular, the TVA did not respond more to artificial chimeras preserving the exact spectral profile of voices. Additional acoustic measures, including temporal modulations and spectral complexity, could not account for the increased activation. These observations rule out simple acoustical cues as a basis for voice selectivity in the TVAs.