
    Speaker Sex Perception from Spontaneous and Volitional Nonverbal Vocalizations.

    In two experiments, we explore how speaker sex recognition is affected by vocal flexibility, introduced by volitional and spontaneous vocalizations. In Experiment 1, participants judged speaker sex from two spontaneous vocalizations, laughter and crying, and volitionally produced vowels. Striking effects of speaker sex emerged: For male vocalizations, listeners' performance was significantly impaired for spontaneous vocalizations (laughter and crying) compared to a volitional baseline (repeated vowels), a pattern that was also reflected in longer reaction times for spontaneous vocalizations. Further, performance was less accurate for laughter than crying. For female vocalizations, a different pattern emerged. In Experiment 2, we largely replicated the findings of Experiment 1 using spontaneous laughter, volitional laughter and (volitional) vowels: here, performance for male vocalizations was impaired for spontaneous laughter compared to both volitional laughter and vowels, providing further evidence that differences in volitional control over vocal production may modulate our ability to accurately perceive speaker sex from vocal signals. For both experiments, acoustic analyses showed relationships between stimulus fundamental frequency (F0) and the participants' responses. The higher the F0 of a vocal signal, the more likely listeners were to perceive a vocalization as being produced by a female speaker, an effect that was more pronounced for vocalizations produced by males. We discuss the results in terms of the availability of salient acoustic cues across different vocalizations.
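    The reported F0 effect can be illustrated with a minimal logistic model of the listener response. The sketch below is not the authors' analysis: the stimulus F0 values, response labels, and fitting procedure (plain gradient descent on standardized F0) are all hypothetical, chosen only to show how the probability of a "female" judgment can be modeled as an increasing function of F0.

    ```python
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def fit_logistic(f0_hz, labels, lr=0.05, epochs=5000):
        """Fit p(female) = sigmoid(w * z + b) by gradient descent,
        where z is F0 standardized to zero mean and unit variance."""
        mean = sum(f0_hz) / len(f0_hz)
        sd = (sum((x - mean) ** 2 for x in f0_hz) / len(f0_hz)) ** 0.5
        z = [(x - mean) / sd for x in f0_hz]
        w, b = 0.0, 0.0
        n = len(z)
        for _ in range(epochs):
            grad_w = grad_b = 0.0
            for zi, yi in zip(z, labels):
                err = sigmoid(w * zi + b) - yi  # prediction error
                grad_w += err * zi
                grad_b += err
            w -= lr * grad_w / n
            b -= lr * grad_b / n
        return w, b, mean, sd

    # Synthetic stimuli: lower-F0 tokens mostly judged "male" (0),
    # higher-F0 tokens mostly judged "female" (1).
    f0 = [110, 120, 130, 140, 180, 200, 210, 230]
    resp = [0, 0, 0, 1, 1, 1, 1, 1]
    w, b, mean, sd = fit_logistic(f0, resp)

    # Predicted probability of a "female" response at a low and a high F0.
    p_low = sigmoid(w * (115 - mean) / sd + b)
    p_high = sigmoid(w * (220 - mean) / sd + b)
    assert p_high > p_low  # higher F0 -> more "female" responses
    ```

    A fitted positive slope `w` reproduces the qualitative pattern described in the abstract; the actual study relates measured stimulus F0 to listeners' responses rather than fitting synthetic data.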

    Is speech alignment to talkers or tasks?

    Speech alignment, the tendency of individuals to subtly imitate each other’s speaking style, is often assessed by comparing a subject’s baseline and shadowed utterances to a model’s utterances, typically through perceptual ratings. These types of comparisons provide information about the occurrence of a change in a subject’s speech, but do not indicate that this change is towards the specific shadowed model. Three studies investigated whether alignment is specific to a shadowed model. Experiment 1 involved the classic baseline-to-shadowed comparison to confirm that subjects did, in fact, sound more like their model when they shadowed, relative to any pre-existing similarities between a subject and model. Experiment 2 tested whether subjects’ utterances sounded more similar to the model they had shadowed or to another unshadowed model. Experiment 3 examined whether subjects’ utterances sounded more similar to the model they had shadowed or to another subject who shadowed a different model. Results of all experiments revealed that subjects sounded more similar to the model they had shadowed. This suggests that shadowing-based speech alignment is not just a change; it is a change in the direction of the shadowed model, specifically.