    Posed and spontaneous nonverbal vocalizations of positive emotions: Acoustic analysis and perceptual judgments

    When experiencing different positive emotional states, like amusement or relief, we may produce nonverbal vocalizations such as laughs and sighs. In the current study, we describe the acoustic structure of posed and spontaneous nonverbal vocalizations of 14 different positive emotions, and test whether listeners (N = 201) map the vocalizations to emotions. The results show that vocalizations of 13 different positive emotions were recognized at better-than-chance levels, but not vocalizations of being moved. Emotions varied in whether vocalizations were better recognized from spontaneous or posed expressions.
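
    As a rough illustration of what better-than-chance recognition means in a forced-choice setting, the sketch below runs a per-emotion binomial test against a uniform guessing rate. The counts, the number of response options, and the use of scipy are illustrative assumptions, not the study's actual data or analysis.

    ```python
    # Hypothetical per-emotion test of recognition accuracy against chance.
    # All counts and the number of response options are invented for illustration.
    from scipy.stats import binomtest

    n_options = 14          # assumed number of response categories
    chance = 1 / n_options  # guessing rate under a uniform forced choice

    # emotion -> (correct judgments, total judgments); illustrative numbers only
    judgments = {
        "amusement": (152, 201),
        "relief": (97, 201),
        "being moved": (19, 201),
    }

    for emotion, (correct, total) in judgments.items():
        result = binomtest(correct, total, p=chance, alternative="greater")
        print(f"{emotion}: {correct / total:.2f} correct, p = {result.pvalue:.2g}")
    ```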

    Human listeners' perception of behavioural context and core affect dimensions in chimpanzee vocalizations

    Vocalizations linked to emotional states are partly conserved among phylogenetically related species. This continuity may allow humans to accurately infer affective information from vocalizations produced by chimpanzees. In two pre-registered experiments, we examine human listeners' ability to infer behavioural contexts (e.g. discovering food) and core affect dimensions (arousal and valence) from 155 vocalizations produced by 66 chimpanzees in 10 different positive and negative contexts at high, medium or low arousal levels. In experiment 1, listeners (n = 310) categorized the vocalizations in a forced-choice task with 10 response options and rated arousal and valence. In experiment 2, participants (n = 3120) matched vocalizations to production contexts using yes/no response options. The results show that listeners were accurate at matching vocalizations of most contexts, in addition to inferring arousal and valence. Judgements were more accurate for negative than for positive vocalizations. An acoustic analysis demonstrated that listeners made use of brightness and duration cues, relied on noisiness in making context judgements, and used pitch to infer core affect dimensions. Overall, the results suggest that human listeners can infer affective information from chimpanzee vocalizations beyond core affect, indicating phylogenetic continuity in the mapping of vocalizations to behavioural contexts.
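
    The cue families named in the acoustic analysis (brightness, duration, noisiness, pitch) correspond to standard acoustic descriptors. Below is a minimal extraction sketch assuming librosa; the specific definitions (spectral centroid for brightness, spectral flatness for noisiness, YIN for pitch) are common proxies, not necessarily the measures used in the study, and the file name is hypothetical.

    ```python
    # Illustrative proxies for the cue families named above, assuming librosa.
    import numpy as np
    import librosa

    def acoustic_cues(path):
        y, sr = librosa.load(path, sr=None)
        duration = len(y) / sr                                              # seconds
        brightness = librosa.feature.spectral_centroid(y=y, sr=sr).mean()   # Hz
        noisiness = librosa.feature.spectral_flatness(y=y).mean()           # 0 (tonal) to 1 (noisy)
        f0 = librosa.yin(y, fmin=60, fmax=1600, sr=sr)                      # fundamental-frequency track
        pitch = float(np.median(f0))                                        # Hz
        return {"duration": duration, "brightness": brightness,
                "noisiness": noisiness, "pitch": pitch}

    # cues = acoustic_cues("chimp_vocalization.wav")  # hypothetical file name
    ```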

    Threat vocalisations are acoustically similar between humans (Homo sapiens) and chimpanzees (Pan troglodytes)

    In behavioural contexts like fighting, eating, and playing, acoustically distinctive vocalisations are produced across many mammalian species. Such expressions may be conserved in evolution, pointing to the possibility of acoustic regularities in the vocalisations of phylogenetically related species. Here, we test this hypothesis by comparing the degree of acoustic similarity between human and chimpanzee vocalisations produced in 10 similar behavioural contexts. We use two complementary analysis methods: pairwise acoustic distance measures and acoustic separability metrics based on unsupervised learning algorithms. Cross-context analysis revealed that acoustic features of vocalisations produced when threatening another individual were distinct from other types of vocalisations and highly similar across species. Taken together, these multimethod findings demonstrate that human vocalisations produced when threatening another person are acoustically similar to chimpanzee vocalisations produced in the same situation, relative to other types of vocalisations, likely reflecting a phylogenetically ancient vocal signalling system.
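
    To make the two analysis ideas concrete, the sketch below computes (1) a mean pairwise acoustic distance between species for one context and (2) a separability score over context labels, using Euclidean distance and the silhouette coefficient as stand-ins for the metrics used in the paper. The feature matrices are random placeholders, not real acoustic data.

    ```python
    # Stand-in versions of (1) pairwise acoustic distance and (2) acoustic separability.
    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)

    # Invented feature matrices: rows = vocalisations, columns = z-scored acoustic features.
    human_threat = rng.normal(size=(20, 8))
    chimp_threat = rng.normal(size=(20, 8))
    chimp_other = rng.normal(loc=1.0, size=(60, 8))

    # (1) Mean cross-species distance for one context: smaller = more similar.
    cross_species_distance = cdist(human_threat, chimp_threat, metric="euclidean").mean()

    # (2) How separable are threat calls from other call types within one species?
    features = np.vstack([chimp_threat, chimp_other])
    labels = np.array([0] * len(chimp_threat) + [1] * len(chimp_other))
    separability = silhouette_score(features, labels)

    print(f"mean human-chimpanzee threat distance: {cross_species_distance:.2f}")
    print(f"threat vs. other separability (silhouette): {separability:.2f}")
    ```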

    How emotion is experienced and expressed in multiple cultures: a large-scale experiment across North America, Europe, and Japan

    Core to understanding emotion are subjective experiences and their expression in facial behavior. Past studies have largely focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about the variety of emotions and their patterns of expression. We examine 45,231 facial reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants’ self-reported experiences in English or Japanese and manual and automated annotations of facial movement. Guided by Semantic Space Theory, we uncover 21 dimensions of emotion in the self-reported experiences of participants in Japan, the United States, and Western Europe, and considerable cross-cultural similarities in experience. Facial expressions predict at least 12 dimensions of experience, despite massive individual differences in experience. We find considerable cross-cultural convergence in the facial actions involved in the expression of emotion, and culture-specific display tendencies: many facial movements differ in intensity in Japan compared to the U.S./Canada and Europe but represent similar experiences. These results quantitatively detail that people in dramatically different cultures experience and express emotion in a high-dimensional, categorical, and similar but complex fashion.

    What’s embodied in a smile? Commentary on target article by Paula M. Niedenthal


    Sauter & Russell Handbook of Emotion Theory

    What is a nonverbal expression of emotion? Both the notion of expression and the notion of emotion are contentious in the literature. Everyone knows the clear cases – smiles, frowns, screams, chuckles, slumps, and so on – but the category as a whole is not well defined. Writers from different theoretical backgrounds have criticized the implicit assumptions inherent in this phrase (Ekman, 1971; Hinde, 1985; Parkinson, 2005; Zajonc, 1985). Are the referenced behaviors in fact expressing something, and is this something an emotion? Not all scientific accounts are consistent with the implication that certain nonverbal behaviors express an emotion. However, for simplicity of reference we will continue to use the phrase “nonverbal expression.” But we do so in an inverted-commas sense only, namely, to refer to those nonverbal behaviors that are commonly taken to express emotions. We acknowledge that the category is vague, and we remain agnostic on whether what is expressed is truly an emotion, or, indeed, whether “express” is what such behaviors do. We are similarly agnostic on the definition of emotion, and we do not use that word here in any technical sense. Instead, our focus in this chapter is on short-term emotion episodes, which we take to be multi-componential events of limited duration commonly taken to be an emotion. Components include but are not limited to appraisals, physiological changes, subjective experiences, nonverbal expressions, and instrumental behaviors.

    We now turn to summarizing how the basic emotion, appraisal, and psychological constructionist research programs account for the production and perception of nonverbal expressions. (See chapters in this volume by Shiota, Ellsworth, and Barrett, respectively, for more general discussions of each research program and for fuller sets of references.) Although each program is commonly called a theory, they are instead broad research programs: each includes a family of loosely related (indeed sometimes conflicting) theories and assumptions, an interpretation of the history of the field, various background assumptions about human nature, prescribed methods and data analytic procedures, and conclusions drawn from previous research. Furthermore, each program continues to develop. We present a prototypical version of each program, emphasizing differences among the three research programs.

    That said, the three research programs also share important assumptions, methods, and conclusions, although the emphasis may vary. For example, when we describe one program’s account of evolutionary origins, the reader should not infer that the other two programs reject evolution by natural selection or assume special creation. Similarly, the fact that one program emphasizes context does not mean that contextual effects are incompatible with the other programs. We present each program’s assertions as if they were established facts, but in fact they are hypotheses. In the conclusion to our chapter, we elaborate on compatibilities and convergences, but we begin by contrasting the three programs.

    Telling Friend from Foe: Listeners Are Unable to Identify In-Group and Out-Group Members from Heard Laughter

    Group membership is important for how we perceive others. Although perceivers can accurately infer group membership from facial expressions and spoken language, it is not clear whether listeners can identify in- and out-group members from non-verbal vocalizations. In the current study, we examined perceivers' ability to identify group membership from non-verbal vocalizations of laughter, testing the following predictions: (1) listeners can distinguish between laughter from different nationalities, (2) listeners can distinguish between laughter from their in-group, a close out-group, and a distant out-group, and (3) greater exposure to laughter from members of other cultural groups is associated with better performance. Listeners (n = 814) took part in an online forced-choice classification task in which they were asked to judge the origin of 24 laughter segments. The responses were analyzed using frequentist and Bayesian statistical analyses. Both kinds of analyses showed that listeners were unable to accurately identify group identity from laughter. Furthermore, exposure did not affect performance. These results provide a strong and clear demonstration that group identity cannot be inferred from laughter.
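
    For readers unfamiliar with pairing frequentist and Bayesian analyses of forced-choice accuracy, the sketch below computes a one-sided binomial test against chance together with a simple Bayes factor. The counts, the assumed chance level, and the uniform prior on accuracy under the alternative are illustrative assumptions, not the analysis reported in the paper.

    ```python
    # Frequentist and Bayesian checks of forced-choice accuracy against chance.
    # All numbers are hypothetical; the Bayes factor assumes a flat Beta(1, 1)
    # prior on accuracy under H1, a common default rather than the paper's model.
    from scipy.stats import binom, binomtest

    k, n = 280, 814      # hypothetical correct responses out of total trials
    chance = 1 / 3       # hypothetical guessing rate (e.g. three response options)

    # Frequentist: is accuracy reliably above chance?
    p_value = binomtest(k, n, p=chance, alternative="greater").pvalue

    # Bayesian: BF01 compares H0 (accuracy = chance) with H1 (accuracy ~ Uniform(0, 1)).
    likelihood_h0 = binom.pmf(k, n, chance)
    likelihood_h1 = 1 / (n + 1)           # binomial PMF integrated over a flat prior
    bf01 = likelihood_h0 / likelihood_h1  # values > 1 favour "no better than chance"

    print(f"accuracy = {k / n:.2f}, p = {p_value:.3g}, BF01 = {bf01:.1f}")
    ```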

    Sounds like a fight: listeners can infer behavioural contexts from spontaneous nonverbal vocalisations

    When we hear another person laugh or scream, can we tell the kind of situation they are in – for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others’ vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, like sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates exceeding chance, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
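
    The two accuracy measures named here can be sketched in a few lines: a sensitivity index (d') for the yes/no matching task, and Wagner's unbiased hit rate for the forced-choice task. The counts, the example confusion matrix, and the particular correction against extreme rates are invented for illustration, not taken from the paper.

    ```python
    # Sketch of the two accuracy measures named above, with invented counts.
    import numpy as np
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity index, with a log-linear correction against rates of 0 or 1."""
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # (1) Yes/no matching for one context (hypothetical counts).
    print(f"d' = {d_prime(hits=220, misses=92, false_alarms=110, correct_rejections=202):.2f}")

    # (2) Forced-choice confusion matrix: rows = true context, columns = chosen context.
    confusion = np.array([[40, 5, 3],
                          [8, 30, 10],
                          [6, 9, 33]])
    hit_rate_by_stimulus = np.diag(confusion) / confusion.sum(axis=1)
    hit_rate_by_response = np.diag(confusion) / confusion.sum(axis=0)
    unbiased_hit_rate = hit_rate_by_stimulus * hit_rate_by_response  # Wagner (1993)
    print("unbiased hit rates:", np.round(unbiased_hit_rate, 2))
    ```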

    Categorical perception of emotional expressions does not require lexical categories

    Does our perception of others’ emotional signals depend on the language we speak, or is our perception the same regardless of language and culture? It is well established that human emotional facial expressions are perceived categorically by viewers, but whether this is driven by perceptual or linguistic mechanisms is debated. We report an investigation into the perception of emotional facial expressions, comparing German speakers to native speakers of Yucatec Maya, a language with no lexical labels that distinguish disgust from anger. In a free naming task, speakers of German, but not Yucatec Maya, made lexical distinctions between disgust and anger. However, in a delayed match-to-sample task, both groups perceived emotional facial expressions of these and other emotions categorically. The magnitude of this effect was equivalent across the language groups, as well as across emotion continua with and without lexical distinctions. Our results show that the perception of affective signals is not driven by lexical labels, instead lending support to accounts of emotions as a set of biologically evolved mechanisms.