
    The Glasgow Voice Memory Test: Assessing the ability to memorize and recognize unfamiliar voices

    One thousand one hundred and twenty subjects, as well as a developmental phonagnosic subject (KH) and age-matched controls, performed the Glasgow Voice Memory Test, which assesses the ability to encode and immediately recognize, through an old/new judgment, both unfamiliar voices (delivered as vowels, making language requirements minimal) and bell sounds. The inclusion of non-vocal stimuli allows the detection of significant dissociations between the two categories (vocal vs. non-vocal stimuli). The distributions of accuracy and sensitivity scores (d’) reflected a wide range of individual differences in voice recognition performance in the population. As expected, KH showed a dissociation between the recognition of voices and bell sounds, her performance being significantly poorer than that of matched controls for voices but not for bells. By providing normative data from a large sample and by testing a developmental phonagnosic subject, we demonstrated that the Glasgow Voice Memory Test, available online and accessible from all over the world, can be a valid screening tool (~5 min) for the preliminary detection of potential cases of phonagnosia and of “super recognizers” for voices.
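
    The sensitivity measure reported here, d', comes from signal detection theory: it contrasts the rate of correctly recognizing "old" items with the rate of falsely accepting "new" ones. As a minimal sketch, the Python below computes d' from hypothetical old/new judgment counts (the numbers are illustrative, not data from the study):

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """Sensitivity (d') for an old/new recognition task.

            d' = z(hit rate) - z(false-alarm rate), where z is the inverse
            of the standard normal CDF. Adding 0.5 to each cell (log-linear
            correction) keeps z finite when a rate would be exactly 0 or 1.
            """
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Hypothetical counts for one subject on the voice trials:
        # 8 old voices recognized, 2 missed; 3 new voices falsely
        # accepted, 7 correctly rejected.
        print(round(d_prime(8, 2, 3, 7), 2))  # ~1.22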

    Speaker Sex Perception from Spontaneous and Volitional Nonverbal Vocalizations.

    In two experiments, we explore how speaker sex recognition is affected by vocal flexibility, introduced by volitional and spontaneous vocalizations. In Experiment 1, participants judged speaker sex from two spontaneous vocalizations, laughter and crying, and from volitionally produced vowels. Striking effects of speaker sex emerged: for male vocalizations, listeners' performance was significantly impaired for spontaneous vocalizations (laughter and crying) compared to a volitional baseline (repeated vowels), a pattern that was also reflected in longer reaction times for spontaneous vocalizations. Further, performance was less accurate for laughter than for crying. For female vocalizations, a different pattern emerged. In Experiment 2, we largely replicated the findings of Experiment 1 using spontaneous laughter, volitional laughter and (volitional) vowels: here, performance for male vocalizations was impaired for spontaneous laughter compared to both volitional laughter and vowels, providing further evidence that differences in volitional control over vocal production may modulate our ability to accurately perceive speaker sex from vocal signals. For both experiments, acoustic analyses showed relationships between stimulus fundamental frequency (F0) and the participants' responses. The higher the F0 of a vocal signal, the more likely listeners were to perceive a vocalization as being produced by a female speaker, an effect that was more pronounced for vocalizations produced by males. We discuss the results in terms of the availability of salient acoustic cues across different vocalizations.
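
    The F0-response relationship described above is the kind of pattern a simple logistic regression makes concrete: the probability of a "female" response rises with the fundamental frequency of the stimulus. A minimal sketch with fabricated illustrative data (not the study's stimuli or responses), using scikit-learn:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical trials: stimulus fundamental frequency (F0, Hz) and
        # whether the listener responded "female" (1) or "male" (0).
        # Values are illustrative, not data from the study.
        f0 = np.array([[110.0], [125.0], [140.0], [160.0], [175.0],
                       [195.0], [220.0], [250.0]])
        responded_female = np.array([0, 0, 0, 1, 0, 1, 1, 1])

        model = LogisticRegression().fit(f0, responded_female)

        # The fitted curve rises with F0: higher-pitched vocalizations are
        # more likely to be heard as female.
        for hz in (120.0, 170.0, 230.0):
            p = model.predict_proba([[hz]])[0, 1]
            print(f"F0 = {hz:.0f} Hz -> P('female' response) = {p:.2f}")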

    Visual adaptation enhances action sound discrimination

    Prolonged exposure, or adaptation, to a stimulus in one modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in one modality can bias perception in another modality. Here we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that, compared to no adaptation, prior adaptation to visual, auditory or audiovisual hand actions enhanced discrimination between two subsequently presented hand action sounds. Discrimination was most enhanced when the visual action ‘matched’ the auditory action. In addition, prior adaptation to a visual, auditory or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action-sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by post-perceptual responses or decisions. More generally, these results indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.
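
    A common way to quantify the aftereffect described above is to compare how often an ambiguous sound is labeled as a given action after adapting to that action, to the other action, or to nothing. A minimal sketch with hypothetical response counts (purely illustrative, not the study's data):

        # Hypothetical counts: how often an ambiguous action sound was
        # labeled "action A" out of 40 trials, after no adaptation
        # (baseline), after adapting to action A, and after action B.
        labeled_a = {"baseline": 20, "adapt_A": 12, "adapt_B": 27}
        trials = 40

        baseline = labeled_a["baseline"] / trials
        for condition in ("adapt_A", "adapt_B"):
            p = labeled_a[condition] / trials
            # A negative shift after adapting to A (and a positive one
            # after B) is the signature of a repulsive aftereffect: the
            # ambiguous sound is heard as less like the adaptor.
            print(f"{condition}: P(label A) = {p:.2f}, "
                  f"shift from baseline = {p - baseline:+.2f}")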

    Mechanisms underlying speech sound discrimination and categorization in humans and zebra finches

    Speech sound categorization in birds seems in many ways comparable to that by humans, but it is unclear what mechanisms underlie such categorization. To examine this, we trained zebra finches and humans to discriminate two pairs of edited speech sounds that varied either along one dimension (vowel or speaker sex) or along two dimensions (vowel and speaker sex). Sounds could be memorized individually, categorized based on one dimension, or categorized by integrating both dimensions. Once training was completed, we tested generalization to new speech sounds that were either more extreme, more ambiguous (i.e., close to the category boundary), or within-category intermediate between the trained sounds. Both humans and zebra finches learned the one-dimensional stimulus-response mappings faster than the two-dimensional mappings. Humans performed better on the trained, extreme and within-category intermediate test sounds than on the ambiguous ones. Some individual birds also did so, but most performed better on the trained exemplars than on the extreme, within-category intermediate and ambiguous test sounds. These results suggest that humans rely on rule learning to form categories and show poor performance when they cannot apply a rule, whereas birds rely mostly on exemplar-based memory, with weak evidence for rule learning.
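
    The rule-versus-exemplar contrast drawn here can be made concrete: a one-dimensional rule assigns categories by a boundary on a single dimension, while an exemplar model compares a new sound against stored training items. A minimal sketch over made-up two-dimensional stimuli (neither the stimuli nor the models are from the study):

        import numpy as np

        # Made-up training exemplars in two acoustic dimensions, e.g. a
        # vowel-quality axis (dim 0) and a speaker-sex axis (dim 1).
        exemplars = np.array([[0.2, 0.3], [0.3, 0.2], [0.8, 0.7], [0.7, 0.9]])
        labels = np.array([0, 0, 1, 1])

        def rule_classify(sound, dim=0, boundary=0.5):
            # One-dimensional rule: the category depends only on which side
            # of the boundary the sound falls on along a single dimension.
            return int(sound[dim] > boundary)

        def exemplar_classify(sound):
            # Exemplar memory: return the label of the nearest stored item.
            distances = np.linalg.norm(exemplars - sound, axis=1)
            return int(labels[np.argmin(distances)])

        # An ambiguous sound near the boundary: the rule model still commits
        # to a category, while the exemplar model's answer depends on chance
        # proximity to stored exemplars.
        test = np.array([0.52, 0.35])
        print(rule_classify(test), exemplar_classify(test))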