
    Recognition of emotions in German laughter across cultures

    Data availability: All data generated in this study are included in the supplementary online materials of this published article.
    Laughter conveys a wide range of information relevant for social interaction. In previous research we have shown that laughter can convey information about the sender’s emotional state; however, other research did not find such an effect. This paper aims to replicate our previous study using participant samples of diverse cultural backgrounds. 161 participants from Poland, the UK, India, Hong Kong, and other countries classified 121 spontaneously emitted German laughter sounds according to laughter type, i.e., joyful, schadenfreude, and tickling laughter. Results showed that all participant groups classified the laughter sounds above chance level, and that there was a slight ingroup advantage for Western listeners. This suggests that classification of laughter according to the sender’s emotional state is possible across different cultures, and that there may be a small advantage for classifying laughter of close cultural proximity.
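
    Above-chance performance of this kind is typically checked against the guessing baseline. The sketch below is a minimal illustration using a binomial test, assuming three equiprobable response options and hypothetical response counts; it is not the study’s actual analysis.

    from scipy.stats import binomtest

    # Hypothetical counts: 121 laughter sounds, three response options
    # (joyful / schadenfreude / tickling), so chance level is 1/3.
    n_sounds = 121
    n_correct = 55          # placeholder value, not the study's data
    chance = 1 / 3

    result = binomtest(n_correct, n_sounds, p=chance, alternative="greater")
    print(f"accuracy = {n_correct / n_sounds:.2f}, p = {result.pvalue:.4f}")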

    Vocal Expression of Affective States in Spontaneous Laughter reveals the Bright and the Dark Side of Laughter

    Data availability: Data have been made publicly available at Figshare and can be accessed at https://doi.org/10.17633/rd.brunel.15028296. Copyright © The Author(s) 2022.
    It has been shown that the acoustical signal of posed laughter can convey affective information to the listener. However, because posed and spontaneous laughter differ in a number of significant aspects, it is unclear whether affective communication generalises to spontaneous laughter. To answer this question, we created a stimulus set of 381 spontaneous laughter audio recordings, produced by 51 different speakers, resembling different types of laughter. In Experiment 1, 159 participants were presented with these audio recordings without any further information about the situational context of the speakers and asked to classify the laughter sounds. Results showed that joyful, tickling, and schadenfreude laughter could be classified significantly above chance level. In Experiment 2, 209 participants were presented with a subset of 121 laughter recordings correctly classified in Experiment 1 and asked to rate the laughter according to four emotional dimensions, i.e., arousal, dominance, sender’s valence, and receiver-directed valence. Results showed that the laughter types differed significantly in their ratings on all dimensions. Joyful laughter and tickling laughter both showed a positive sender’s valence and receiver-directed valence, whereby tickling laughter had a particularly high arousal. Schadenfreude had a negative receiver-directed valence and a high dominance, thus providing empirical evidence for the existence of a dark side in spontaneous laughter. The present results suggest that with the evolution of human social communication, laughter diversified from the former play signal of non-human primates to a much more fine-grained signal that can serve a multitude of social functions in order to regulate group structure and hierarchy.
    German Research Foundation (SZ 267/1-1; DP Szameitat).
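
    A natural way to test whether the laughter types differ on a rating dimension is a separate one-way ANOVA per dimension. The abstract does not name the statistical procedure, so the sketch below is only one plausible analysis, run on simulated placeholder ratings whose mean pattern loosely follows the reported results (high dominance for schadenfreude).

    import numpy as np
    from scipy.stats import f_oneway

    # Simulated placeholder ratings on the dominance dimension;
    # not the study's data.
    rng = np.random.default_rng(0)
    joy = rng.normal(4.0, 1.0, 40)
    tickling = rng.normal(3.5, 1.0, 40)
    schadenfreude = rng.normal(6.0, 1.0, 41)   # high dominance per abstract

    f_stat, p_value = f_oneway(joy, tickling, schadenfreude)
    print(f"F = {f_stat:.2f}, p = {p_value:.4g}")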

    Speaker Sex Perception from Spontaneous and Volitional Nonverbal Vocalizations.

    In two experiments, we explore how speaker sex recognition is affected by vocal flexibility, introduced by volitional and spontaneous vocalizations. In Experiment 1, participants judged speaker sex from two spontaneous vocalizations, laughter and crying, and volitionally produced vowels. Striking effects of speaker sex emerged: for male vocalizations, listeners’ performance was significantly impaired for spontaneous vocalizations (laughter and crying) compared to a volitional baseline (repeated vowels), a pattern that was also reflected in longer reaction times for spontaneous vocalizations. Further, performance was less accurate for laughter than crying. For female vocalizations, a different pattern emerged. In Experiment 2, we largely replicated the findings of Experiment 1 using spontaneous laughter, volitional laughter, and (volitional) vowels: here, performance for male vocalizations was impaired for spontaneous laughter compared to both volitional laughter and vowels, providing further evidence that differences in volitional control over vocal production may modulate our ability to accurately perceive speaker sex from vocal signals. For both experiments, acoustic analyses showed relationships between stimulus fundamental frequency (F0) and the participants’ responses: the higher the F0 of a vocal signal, the more likely listeners were to perceive the vocalization as being produced by a female speaker, an effect that was more pronounced for vocalizations produced by males. We discuss the results in terms of the availability of salient acoustic cues across different vocalizations.
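
    The reported F0-response relationship can be illustrated with a logistic regression of “female” responses on stimulus F0. Everything below is simulated for illustration only; the shape and parameters of the response curve are assumptions, not the study’s estimates.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Simulated stimuli: one F0 value (Hz) per vocalization, plus a
    # simulated listener response (True = "female"). Placeholder data.
    rng = np.random.default_rng(1)
    f0 = rng.uniform(80.0, 400.0, 500)
    p_female = 1.0 / (1.0 + np.exp(-(f0 - 180.0) / 30.0))  # assumed curve
    responded_female = rng.random(500) < p_female

    model = LogisticRegression().fit(f0.reshape(-1, 1), responded_female)
    print("log-odds change per Hz of F0:", model.coef_[0][0])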

    Acoustic profiles of distinct emotional expressions in laughter.

    Although listeners are able to decode the underlying emotions embedded in acoustical laughter sounds, little is known about the acoustical cues that differentiate between the emotions. This study investigated the acoustical correlates of laughter expressing four different emotions: joy, tickling, taunting, and schadenfreude. Analysis of 43 acoustic parameters showed that the four emotions could be accurately discriminated on the basis of a small parameter set. Vowel quality contributed only minimally to emotional differentiation, whereas prosodic parameters were more effective. Emotions are expressed by similar prosodic parameters in both laughter and speech.
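
    Discriminating emotion categories from a set of acoustic parameters is commonly done with a classifier such as linear discriminant analysis. The abstract does not name the method used, so the sketch below is just one plausible setup, with random placeholder features standing in for the 43 measured parameters (accuracy on such random data will sit near the 25% chance level).

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Placeholder data: 200 laughter sounds x 43 acoustic parameters,
    # labelled with four emotions (joy, tickling, taunting, schadenfreude).
    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 43))
    y = rng.integers(0, 4, size=200)

    lda = LinearDiscriminantAnalysis()
    scores = cross_val_score(lda, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f} (chance = 0.25)")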

    Formant characteristics of human laughter

    Although laughter is an important aspect of nonverbal vocalization, its acoustic properties are still not fully understood. Extreme articulation during laughter production, such as wide jaw opening, suggests that laughter can have very high first formant (F1) frequencies. We measured fundamental frequency and formant frequencies of the vowels produced in the vocalic segments of laughter. Vocalic segments showed higher average F1 frequencies than those previously reported, and individual values could be as high as 1100 Hz for male speakers and 1500 Hz for female speakers. To our knowledge, these are the highest F1 frequencies reported to date for human vocalizations, exceeding even the F1 frequencies reported for trained soprano singers. These exceptionally high F1 values are likely to be based on the extreme positions adopted by the vocal tract during laughter, in combination with physiological constraints accompanying the production of a “pressed” voice.
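
    Fundamental frequency and formant frequencies of a vocalic segment can be measured with Praat, for example through the parselmouth Python bindings. The file name, time window, and formant ceiling below are hypothetical; this is a minimal sketch, not the study’s measurement pipeline.

    import parselmouth

    # Extract one vocalic laughter segment (file and times are placeholders)
    snd = parselmouth.Sound("laugh.wav").extract_part(0.20, 0.35)

    pitch = snd.to_pitch()                                # F0 track
    formants = snd.to_formant_burg(maximum_formant=5500)  # assumed ceiling

    t = snd.start_time + snd.duration / 2                 # segment midpoint
    print("F0 (Hz):", pitch.get_value_at_time(t))
    print("F1 (Hz):", formants.get_value_at_time(1, t))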

    It is not always tickling: distinct cerebral responses during perception of different laughter types.

    Laughter is highly relevant for social interaction in human beings and non-human primates. In humans as well as in non-human primates, laughter can be induced by tickling. Human laughter, however, has further diversified and encompasses emotional laughter types with various communicative functions, e.g., joyful and taunting laughter. Here, we evaluated whether this evolutionary diversification of ecological functions is associated with distinct cerebral responses underlying laughter perception. Functional MRI revealed a double dissociation of cerebral responses during perception of tickling laughter and emotional laughter (joy and taunt), with higher activations in the anterior rostral medial frontal cortex (arMFC) when emotional laughter was perceived, and stronger responses in the right superior temporal gyrus (STG) during appreciation of tickling laughter. Enhanced activation of the arMFC for emotional laughter presumably reflects increasing demands on social cognition processes arising from the greater social salience of these laughter types. The activation increase in the STG for tickling laughter may be linked to the higher acoustic complexity of this laughter type. The observed dissociation of cerebral responses for emotional laughter and tickling laughter was independent of task-directed focusing of attention. These findings support the postulated diversification of human laughter in the course of evolution from an unequivocal play signal to laughter with distinct emotional contents subserving complex social functions.
