
    Fusion of Learned Multi-Modal Representations and Dense Trajectories for Emotional Analysis in Videos

    When designing a video affective content analysis algorithm, one of the most important steps is the selection of discriminative features for the effective representation of video segments. The majority of existing affective content analysis methods either use low-level audio-visual features or generate handcrafted higher-level representations based on these low-level features. We propose in this work to use deep learning methods, in particular convolutional neural networks (CNNs), in order to automatically learn and extract mid-level representations from raw data. To this end, we exploit the audio and visual modality of videos by employing Mel-Frequency Cepstral Coefficients (MFCC) and color values in the HSV color space. We also incorporate dense-trajectory-based motion features in order to further enhance the performance of the analysis. By means of multi-class support vector machines (SVMs) and fusion mechanisms, music video clips are classified into one of four affective categories representing the four quadrants of the Valence-Arousal (VA) space. Results obtained on a subset of the DEAP dataset show (1) that higher-level representations perform better than low-level features, and (2) that incorporating motion information leads to a notable performance gain, independently of the chosen representation.
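As a rough illustration of the fusion-and-classification stage described in this abstract, the sketch below trains one multi-class SVM per modality on pre-extracted feature matrices and averages their class probabilities (score-level fusion) to pick a VA quadrant. The array names and the choice of scikit-learn are assumptions for illustration, not the authors' implementation.

```python
# Rough sketch (not the authors' code): late fusion of per-modality features
# with multi-class SVMs for the four Valence-Arousal quadrants.
# The feature matrices (audio, visual, motion) are assumed to be pre-extracted.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def late_fusion_predict(train_sets, labels, test_sets):
    """Train one SVM per modality, then average class probabilities (score-level fusion)."""
    probas, classes = [], None
    for X_train, X_test in zip(train_sets, test_sets):
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
        clf.fit(X_train, labels)            # labels: VA quadrant of each training clip
        probas.append(clf.predict_proba(X_test))
        classes = clf.classes_
    fused = np.mean(probas, axis=0)         # average the per-modality probabilities
    return classes[fused.argmax(axis=1)]    # fused VA-quadrant prediction per clip
```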

    An emotional mess! Deciding on a framework for building a Dutch emotion-annotated corpus

    Seeing the myriad of existing emotion models, with the categorical versus dimensional opposition as the most important dividing line, building an emotion-annotated corpus requires some well-thought-out strategies concerning framework choice. In our work on automatic emotion detection in Dutch texts, we investigate this problem by means of two case studies. We find that the labels joy, love, anger, sadness and fear are well suited to annotating texts coming from various domains and topics, but that the connotation of the labels strongly depends on the origin of the texts. Moreover, it seems that information is lost when an emotional state is forced into a limited set of categories, indicating that a bi-representational format is desirable when creating an emotion corpus.
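A bi-representational annotation format, as argued for above, can be pictured as a record that stores a categorical label alongside continuous dimensional values, so neither view is lost. The field names, value ranges and validation below are hypothetical, not the annotation scheme actually used for the corpus.

```python
# Hypothetical sketch of a bi-representational annotation record: a categorical
# emotion label plus continuous valence/arousal scores for the same text.
from dataclasses import dataclass

CATEGORIES = {"joy", "love", "anger", "sadness", "fear"}

@dataclass
class EmotionAnnotation:
    text: str
    category: str        # one of CATEGORIES
    valence: float       # assumed scale: -1.0 (negative) .. 1.0 (positive)
    arousal: float       # assumed scale:  0.0 (calm)     .. 1.0 (excited)

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

# Example usage with a Dutch sentence ("What a beautiful day!")
example = EmotionAnnotation("Wat een prachtige dag!", "joy", valence=0.9, arousal=0.6)
```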

    An Affect-Rich Neural Conversational Model with Biased Attention and Weighted Cross-Entropy Loss

    Affect conveys important implicit information in human communication. Having the capability to correctly express affect during human-machine conversations is one of the major milestones in artificial intelligence. In recent years, extensive research on open-domain neural conversational models has been conducted. However, embedding affect into such models is still underexplored. In this paper, we propose an end-to-end affect-rich open-domain neural conversational model that produces responses not only appropriate in syntax and semantics, but also with rich affect. Our model extends the Seq2Seq model and adopts VAD (Valence, Arousal and Dominance) affective notations to embed each word with affect. In addition, our model considers the effect of negators and intensifiers via a novel affective attention mechanism, which biases attention towards affect-rich words in input sentences. Lastly, we train our model with an affect-incorporated objective function to encourage the generation of affect-rich words in the output responses. Evaluations based on both perplexity and human judgments show that our model outperforms the state-of-the-art baseline model of comparable size in producing natural and affect-rich responses. Comment: AAAI-19
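The affect-incorporated objective can be pictured as a cross-entropy loss whose per-token weight grows with the target word's distance from a neutral point in VAD space, so affect-rich words are rewarded. The sketch below is a minimal PyTorch version of that idea; the weighting formula, the neutral point, and the VAD lookup table are assumptions, not the paper's exact loss.

```python
# Rough sketch of an affect-weighted cross-entropy loss: target words far from
# a neutral VAD point get larger weights, encouraging affect-rich generations.
# The weighting scheme and neutral point are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def affect_weighted_ce(logits, targets, vad_table, neutral=(5.0, 5.0, 5.0), lam=0.5):
    """
    logits:    (batch, seq, vocab) decoder scores
    targets:   (batch, seq) gold token ids
    vad_table: (vocab, 3) VAD values per vocabulary word (assumed 1-9 scale)
    """
    neutral_pt = torch.tensor(neutral, dtype=vad_table.dtype)
    affect_strength = torch.norm(vad_table - neutral_pt, dim=1)   # (vocab,)
    weights = 1.0 + lam * affect_strength[targets]                # (batch, seq)
    ce = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    return (weights * ce).mean()
```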

    Encouraging the perceptual underdog: positive affective priming of nonpreferred local–global processes

    Two experiments examined affective priming of global and local perception. Participants attempted to detect a target that might be present as either a global or a local shape. Verbal primes were used in one experiment, and pictorial primes were used in the other. In both experiments, positive primes led to improved performance on the nonpreferred dimension. For participants exhibiting global precedence, detection of local targets was significantly improved, whereas for participants exhibiting local precedence, detection of global targets was significantly improved. The results support an interpretation of the effects of positive affective priming in terms of increased perceptual flexibility.

    How Do Induced Affective States Bias Emotional Contagion to Faces? A Three-Dimensional Model

    Affective states can propagate in a group of people and influence their ability to judge others’ affective states. In the present paper, we present a simple mathematical model to describe this process in a three-dimensional affective space. We obtained data from 67 participants randomly assigned to two experimental groups. Participants watched either an upsetting or an uplifting video previously calibrated for this goal. Immediately afterwards, participants reported their baseline subjective affect on three dimensions: (1) positivity, (2) negativity, and (3) arousal. In a second phase, participants rated the affect they subjectively judged from 10 target angry faces and 10 target happy faces on the same three-dimensional scales. These judgments were used as an index of the participant’s affective state after observing the faces. Participants’ affective responses were subsequently mapped onto a simple three-dimensional model of emotional contagion, in which the shortest distance between the baseline self-reported affect and the target judgment was calculated. The results display a double dissociation: negatively induced participants show more emotional contagion to angry than happy faces, while positively induced participants show more emotional contagion to happy than angry faces. In sum, the emotional contagion exerted by the videos selectively affected judgments of the affective states of others’ faces. We discuss the directionality of emotional contagion to faces, considering whether negative emotions are more easily propagated than positive ones. Additionally, we comment on the lack of significant correlations between our model and standardized tests of empathy and emotional contagion. Funding: DFG, 414044773, Open Access Publizieren 2019 - 2020 / Technische Universität Berlin.
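The contagion index described above reduces to a distance computation in the three-dimensional space (positivity, negativity, arousal). A minimal sketch, assuming a Euclidean (straight-line) reading of "shortest distance" and illustrative variable names:

```python
# Hypothetical sketch: distance between baseline self-reported affect and the
# judgment of a target face, both points in (positivity, negativity, arousal).
import numpy as np

def contagion_index(baseline, judgment):
    """Straight-line distance between two points in 3D affect space."""
    baseline, judgment = np.asarray(baseline, float), np.asarray(judgment, float)
    return float(np.linalg.norm(judgment - baseline))

# Example with made-up values for a negatively induced participant
baseline = (0.2, 0.8, 0.7)   # (positivity, negativity, arousal)
judgment = (0.1, 0.9, 0.8)   # rating given to an angry target face
print(contagion_index(baseline, judgment))
```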