
    The effect of prosody on decision making

    MPA Regional Research Award
    This study sought to induce mood through affective prosody and then measure whether this had a significant effect on decision making. Prosody can be defined as the tone, rate, or stress patterns that occur during speech; prosody used to convey emotion is termed affective prosody. Prior research suggests that mood is a viable predictor of performance on risky decision-making tasks. More specifically, positive mood has been linked with heuristic processing, which relies on emotional reasoning: individuals initially feel more averse to losses and more pleased with wins, leading to more advantageous decision making. Negative mood has been linked with substantive/systematic processing: individuals initially tend to make more disadvantageous decisions in an effort to determine the most favorable outcome. This study investigated whether affective prosody alone could directly induce mood and thereby alter performance on an unrelated decision-making task. It used the Hungry Donkey Task, a measure of risk taking adapted for use with both children and adults. It was hypothesized that adults induced with positive affective prosody would make more favorable decisions in early trials, while those induced with negative affective prosody would make riskier decisions initially. Mood was successfully induced with affective prosody: participants in the positive condition reported more positive self-report mood than those in the negative condition. The results do not support the hypothesis; instead, negative affective prosody elicited better decision making in both the early and later trials. This may be because the positive condition relied on heuristic processing, which may have led to less advantageous decisions.
    This study helps build a greater understanding of the effects of mood on risky decision making and lends support to the claim that affective prosody can serve as an influencing factor in others' behavior.
    No embargo. Academic Major: Psychology

    Predicting continuous conflict perception with Bayesian Gaussian processes

    Conflict is one of the most important phenomena of social life, but it is still largely neglected by the computing community. This work proposes an approach that detects common conversational social signals (loudness, overlapping speech, etc.) and predicts the conflict level perceived by human observers in continuous, non-categorical terms. The proposed regression approach is fully Bayesian and adopts Automatic Relevance Determination (ARD) to identify the social signals that most influence the outcome of the prediction. The experiments are performed over the SSPNet Conflict Corpus, a publicly available collection of 1430 clips extracted from televised political debates (roughly 12 hours of material for 138 subjects in total). The results show that it is possible to achieve a correlation close to 0.8 between actual and predicted conflict perception.
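    The ARD mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's model: it uses scikit-learn's type-II maximum-likelihood kernel fitting rather than a fully Bayesian treatment, and the synthetic features are hypothetical stand-ins for the paper's social signals.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    # Synthetic stand-in for per-clip social-signal features
    # (e.g. loudness, overlap ratio); only the first feature matters.
    X = rng.normal(size=(200, 3))
    y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)  # "conflict score"

    # One length scale per feature is what makes the RBF kernel ARD:
    # during marginal-likelihood fitting, irrelevant features are
    # pushed toward large length scales, i.e. low relevance.
    kernel = RBF(length_scale=np.ones(3)) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    # Inspect the fitted per-feature length scales (small = relevant).
    length_scales = gp.kernel_.k1.length_scale
    print(length_scales)

    # In-sample correlation between actual and predicted targets.
    corr = np.corrcoef(gp.predict(X), y)[0, 1]
    print(corr)
    ```

    Reading off the fitted length scales gives the same kind of relevance ranking over input signals that the paper uses ARD for, just on toy data.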

    Language discrimination by newborns: Teasing apart phonotactic, rhythmic, and intonational cues

    Speech rhythm has long been claimed to be a useful bootstrapping cue in the very first steps of language acquisition. Previous studies have suggested that newborn infants do categorize varieties of speech rhythm, as demonstrated by their ability to discriminate between certain languages. However, the existing evidence is not unequivocal: in previous studies, stimuli discriminated by newborns always contained additional speech cues on top of rhythm. Here, we conducted a series of experiments assessing discrimination between Dutch and Japanese by newborn infants, using a speech resynthesis technique to progressively degrade non-rhythmical properties of the sentences. When the stimuli are resynthesized using identical phonemes and artificial intonation contours for the two languages, thereby preserving only their rhythmic and broad phonotactic structure, newborns still seem to be able to discriminate between the two languages, but the effect is weaker than when intonation is present. This leaves open the possibility that the temporal correlation between intonational and rhythmic cues might actually facilitate the processing of speech rhythm.

    Employing Emotion Cues to Verify Speakers in Emotional Talking Environments

    Usually, people talk neutrally in environments free of abnormal talking conditions such as stress and emotion. Emotional conditions such as happiness, anger, and sadness can also affect a person's talking tone, and such emotions are directly affected by the patient's health status. In neutral talking environments, speakers can be verified easily; in emotional talking environments, they cannot. Consequently, speaker verification systems do not perform as well in emotional talking environments as they do in neutral ones. In this work, a two-stage approach is employed and evaluated to improve speaker verification performance in emotional talking environments. The approach employs speaker emotion cues (a text-independent, emotion-dependent speaker verification problem) based on both Hidden Markov Models (HMMs) and Suprasegmental Hidden Markov Models (SPHMMs) as classifiers. It comprises two cascaded stages that combine and integrate an emotion recognizer and a speaker recognizer into one recognizer. The architecture has been tested on two separate emotional speech databases: our collected database and the Emotional Prosody Speech and Transcripts database. The results show that the proposed approach gives promising results, with a significant improvement over previous studies and over other approaches such as an emotion-independent speaker verification approach and an emotion-dependent speaker verification approach based entirely on HMMs.
    Comment: Journal of Intelligent Systems, Special Issue on Intelligent Healthcare Systems, De Gruyter, 201
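    As a rough illustration of the cascaded two-stage idea (recognize the emotion first, then score the claim against an emotion-dependent speaker model), the sketch below uses simple diagonal-Gaussian scorers as hypothetical stand-ins for the paper's HMM/SPHMM classifiers. The `TwoStageVerifier` class, its model dictionaries, and the threshold are all assumptions made for this example, not the paper's implementation.

    ```python
    import numpy as np

    def log_gaussian(x, mean, var):
        # Log-likelihood of a feature vector under a diagonal Gaussian.
        return float(-0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var)))

    class TwoStageVerifier:
        def __init__(self, emotion_models, speaker_models):
            # emotion_models: emotion -> (mean, var)
            # speaker_models: (speaker, emotion) -> (mean, var)
            self.emotion_models = emotion_models
            self.speaker_models = speaker_models

        def verify(self, features, claimed_speaker, threshold):
            # Stage 1: recognize the utterance's emotion.
            emotion = max(self.emotion_models,
                          key=lambda e: log_gaussian(features, *self.emotion_models[e]))
            # Stage 2: score against the claimed speaker's model for
            # that emotion only, and accept if the score clears the threshold.
            score = log_gaussian(features, *self.speaker_models[(claimed_speaker, emotion)])
            return score >= threshold, emotion

    # Toy models for one speaker under two emotions.
    emotion_models = {
        "neutral": (np.zeros(2), np.ones(2)),
        "angry": (np.full(2, 5.0), np.ones(2)),
    }
    speaker_models = {
        ("alice", "neutral"): (np.zeros(2), np.ones(2)),
        ("alice", "angry"): (np.full(2, 5.0), np.ones(2)),
    }
    verifier = TwoStageVerifier(emotion_models, speaker_models)
    accepted, emotion = verifier.verify(np.array([0.1, 0.0]), "alice", threshold=-3.0)
    print(accepted, emotion)
    ```

    The point of the cascade is that stage 2 never has to model all emotions at once: stage 1 routes each utterance to the matching emotion-dependent speaker model.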