    EMIR: A novel emotion-based music retrieval system

    Music is inherently expressive of emotional meaning and affects people's mood. In this paper, we present EMIR (Emotional Music Information Retrieval), a novel system that uses latent emotion elements in both music and non-descriptive queries (NDQs) to detect implicit emotional associations between users and music, thereby enhancing Music Information Retrieval (MIR). We try to understand the latent emotional intent of queries via machine learning for emotion classification and compare the performance of emotion detection approaches on different feature sets. For this purpose, we extract music emotion features from lyrics and social tags crawled from the Internet, label a subset for training, model them in a high-dimensional emotion space, and recognize users' latent emotions by query emotion analysis. The similarity between queries and music is computed with a verified BM25 model.
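    The abstract names BM25 as the query-music similarity function without giving details. The standard Okapi BM25 scoring it builds on can be sketched as follows; the tokenized documents and the `k1`/`b` defaults are illustrative assumptions, not values from the paper:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each document against the query with standard Okapi BM25.

    `docs` is a list of token lists (e.g. lyric/tag tokens); k1 and b
    are the usual free parameters, set to common defaults here.
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency: in how many documents each term occurs
    df = Counter()
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            score += (idf * tf[t] * (k1 + 1)
                      / (tf[t] + k1 * (1 - b + b * len(d) / avgdl)))
        scores.append(score)
    return scores
```

    A query that shares an emotion term with one document and not another will rank the former higher, which is the mechanism the system relies on once queries and music are mapped into the same emotion vocabulary.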

    Eight Dimensions for the Emotions

    The author proposes a dimensional model of our emotion concepts that is intended to be largely independent of one's theory of emotions and applicable to the different ways in which emotions are measured. He outlines some conditions for selecting the dimensions based on these motivations and general conceptual grounds. Given these conditions, he then advances an 8-dimensional model that is shown to effectively differentiate emotion labels both within and across cultures, as well as more obscure expressive language. The 8 dimensions are: (1) attracted-repulsed, (2) powerful-weak, (3) free-constrained, (4) certain-uncertain, (5) generalized-focused, (6) future directed-past directed, (7) enduring-sudden, (8) socially connected-disconnected.
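    One way to make such a model concrete is to place emotion labels as vectors in the 8-dimensional space and compare them; the coordinate values below are purely illustrative assumptions, not ratings from the paper:

```python
import math

# The eight bipolar dimensions from the paper, each scored in [-1, 1]
# (negative pole to positive pole). The example values are invented.
DIMENSIONS = ["attracted-repulsed", "powerful-weak", "free-constrained",
              "certain-uncertain", "generalized-focused",
              "future-past", "enduring-sudden", "connected-disconnected"]

emotions = {
    "joy":  [0.9, 0.6, 0.7, 0.5, 0.0, 0.2, 0.3, 0.8],
    "fear": [-0.8, -0.7, -0.6, -0.9, 0.1, 0.6, -0.5, -0.3],
}

def cosine(a, b):
    """Cosine similarity between two emotion vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)
```

    With profiles like these, labels that the model should differentiate (joy vs. fear) come out with low or negative similarity, which is the sense in which the dimensions "effectively differentiate emotion labels".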

    A Robust Method for Speech Emotion Recognition Based on Infinite Student’s t

    The speech emotion classification method proposed in this paper is based on a Student's t-mixture model with an infinite number of components (iSMM) and can directly and effectively recognize various kinds of speech emotion samples. Compared with the traditional GMM (Gaussian mixture model), a speech emotion model based on the Student's t-mixture can effectively handle outliers in the emotion feature space, and the t-mixture model remains robust to atypical emotion test data. To address the high data complexity caused by the high-dimensional space and the problem of insufficient training samples, a global latent space is added to the emotion model. This allows the number of mixture components to grow without bound, forming the iSMM emotion model, which can automatically determine the best number of components at lower complexity for classifying various kinds of emotion-characteristic data. Evaluated on one spontaneous (FAU Aibo Emotion Corpus) and two acted (DES and EMO-DB) universal speech emotion databases, which have high-dimensional feature samples and diverse data distributions, the iSMM maintains better recognition performance than the comparison methods. The model's effectiveness and its generalization to high-dimensional data and outliers are thereby verified.
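    The claimed robustness to outliers comes from the heavier tails of the Student's t distribution. A minimal sketch (single densities, not the full mixture; `nu=3` is an assumed degrees-of-freedom value) shows why an extreme feature vector is far less surprising under t than under a Gaussian:

```python
import math

def normal_logpdf(x, mu=0.0, sigma=1.0):
    """Log density of a univariate Gaussian."""
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))

def student_t_logpdf(x, nu=3.0, mu=0.0, sigma=1.0):
    """Log density of a location-scale Student's t with nu d.o.f."""
    z = (x - mu) / sigma
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi) - math.log(sigma)
            - (nu + 1) / 2 * math.log(1 + z * z / nu))
```

    A sample 6 scale units from the mean has log density around -18.9 under the Gaussian but only about -6.1 under t with nu=3, so a single outlier distorts Gaussian mixture parameter estimates far more than it does t-mixture estimates.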

    The Many Moods of Emotion

    This paper presents a novel approach to the facial expression generation problem. Building on the psychological community's assumption that emotion is intrinsically continuous, we first design our own continuous emotion representation with a 3-dimensional latent space derived from a neural network trained on discrete emotion classification. The resulting representation can be used to annotate large in-the-wild datasets and then to train a Generative Adversarial Network. We first show that our model is able to map back to discrete emotion classes with objectively and subjectively better image quality than the usual discrete approaches, and also that we are able to cover the larger space of possible facial expressions, generating the many moods of emotion. Moreover, two axes in this space can be found that generate expression changes similar to those of traditional continuous representations such as arousal-valence. Finally, we show from visual interpretation that the third remaining dimension is highly related to the well-known dominance dimension from psychology.
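    The core architectural idea, a network with a 3-dimensional bottleneck trained on discrete emotion labels, can be sketched roughly as follows; the feature size, class count, and random weights are placeholders, not the paper's trained model:

```python
import math
import random

random.seed(0)

# Hypothetical dimensions: input features, 3-D latent emotion space,
# and a set of discrete emotion classes. Weights are random stand-ins;
# in the paper they come from training on discrete emotion labels.
N_FEATURES, N_CLASSES = 16, 7
W_enc = [[random.gauss(0, 1) for _ in range(3)] for _ in range(N_FEATURES)]
W_cls = [[random.gauss(0, 1) for _ in range(N_CLASSES)] for _ in range(3)]

def embed(x):
    """Project input features into the 3-D continuous emotion space."""
    return [math.tanh(sum(xi * row[j] for xi, row in zip(x, W_enc)))
            for j in range(3)]

def classify(x):
    """Softmax over discrete emotion classes from the latent point."""
    z = embed(x)
    logits = [sum(zj * W_cls[j][k] for j, zj in enumerate(z))
              for k in range(N_CLASSES)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

    Because every input is forced through the 3-D `embed` layer before classification, any point in that space is a valid "emotion", which is what lets the representation both recover discrete classes and interpolate between them.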

    Emotional valence and arousal affect reading in an interactive way: neuroimaging evidence for an approach-withdrawal framework

    A growing body of literature shows that the emotional content of verbal material affects reading, wherein emotional words are given processing priority over neutral words. Human emotions can be conceptualised within a two-dimensional model comprising emotional valence and arousal (intensity). These variables are at least in part distinct, but recent studies report interactive effects during implicit emotion processing and relate these to stimulus-evoked approach-withdrawal tendencies. The aim of the present study was to explore how valence and arousal interact at the neural level during implicit emotion word processing. The emotional attributes of written word stimuli were orthogonally manipulated based on behavioural ratings from a corpus of emotion words. Stimuli were presented during an fMRI experiment while 16 participants performed a lexical decision task, which did not require explicit evaluation of a word's emotional content. Results showed greater neural activation within right insular cortex in response to stimuli evoking conflicting approach-withdrawal tendencies (i.e., positive high-arousal and negative low-arousal words) compared to stimuli evoking congruent tendencies (i.e., positive low-arousal and negative high-arousal words). Further, a significant cluster of activation in the left extra-striate cortex was found in response to emotional compared to neutral words, suggesting enhanced perceptual processing of emotionally salient stimuli. These findings support an interactive two-dimensional approach to the study of emotion word recognition and suggest that integrating the valence and arousal dimensions recruits a brain region associated with interoception, emotional awareness and sympathetic functions.
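    The 2x2 stimulus design can be written down directly from the abstract's own cell definitions; the function below simply encodes which valence-arousal cells count as conflicting vs. congruent approach-withdrawal tendencies:

```python
def approach_withdrawal(valence, arousal):
    """Label a word's valence/arousal cell per the 2x2 design.

    valence is 'pos' or 'neg'; arousal is 'high' or 'low'. Per the
    abstract, positive high-arousal and negative low-arousal words
    evoke conflicting tendencies; the other two cells are congruent.
    """
    conflicting = {("pos", "high"), ("neg", "low")}
    return "conflicting" if (valence, arousal) in conflicting else "congruent"
```

    Orthogonal manipulation means each of the four cells is filled independently, so the fMRI contrast of conflicting vs. congruent cells is not confounded by either dimension alone.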

    A Multi-layer Hybrid Framework for Dimensional Emotion Classification

    This paper investigates dimensional emotion prediction and classification from naturalistic facial expressions. Similarly to many pattern recognition problems, dimensional emotion classification requires generating multi-dimensional outputs. To date, classification for valence and arousal dimensions has been done separately, assuming that they are independent. However, various psychological findings suggest that these dimensions are correlated. We therefore propose a novel, multi-layer hybrid framework for emotion classification that is able to model inter-dimensional correlations. Firstly, we derive a novel geometric feature set based on the (a)symmetric spatio-temporal characteristics of facial expressions. Subsequently, we use the proposed feature set to train a multi-layer hybrid framework composed of a temporal regression layer for predicting emotion dimensions, a graphical model layer for modeling valence-arousal correlations, and a final classification and fusion layer exploiting informative statistics extracted from the lower layers. This framework (i) introduces the Auto-Regressive Coupled HMM (ACHMM), a graphical model specifically tailored to accommodate not only inter-dimensional correlations but also to exploit the internal dynamics of the actual observations, and (ii) replaces the commonly used Maximum Likelihood principle with a more robust final classification and fusion layer. Subject-independent experimental validation, performed on a naturalistic set of facial expressions, demonstrates the effectiveness of the derived feature set, and the robustness and flexibility of the proposed framework.
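    The three-layer structure can be illustrated with toy stand-ins for each layer; the real system uses a trained regressor, the ACHMM, and a learned fusion classifier, so the smoothing window, coupling constant `rho`, and quadrant rule below are illustrative assumptions only:

```python
def temporal_regression(frames):
    """Toy regression layer: smooth per-frame features into continuous
    valence and arousal estimates (3-frame moving average)."""
    def smooth(xs):
        return [sum(xs[max(0, i - 2):i + 1]) / len(xs[max(0, i - 2):i + 1])
                for i in range(len(xs))]
    return (smooth([f["valence"] for f in frames]),
            smooth([f["arousal"] for f in frames]))

def couple(v, a, rho=0.3):
    """Toy graphical-model layer: pull each dimension toward the other
    to mimic modeling of valence-arousal correlation."""
    return ([vi + rho * ai for vi, ai in zip(v, a)],
            [ai + rho * vi for vi, ai in zip(v, a)])

def fuse_and_classify(v, a):
    """Toy fusion layer: quadrant classification from sequence means."""
    mv, ma = sum(v) / len(v), sum(a) / len(a)
    return ("positive" if mv >= 0 else "negative",
            "active" if ma >= 0 else "passive")
```

    The point of the layering is that the final decision is made from statistics of the coupled dimensional trajectories rather than from either dimension in isolation.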

    Integrating Kansei Engineering into Kano and SERVQUAL Model to Determine the Priorities of Service Improvement (Case Study: Café Agape at Ruteng, East Nusa Tenggara – Indonesia)

    In order to improve service quality, a research framework integrating Kansei Engineering into the Kano and SERVQUAL models was deployed at Café Agape. The SERVQUAL model is used to identify whether the provided service fulfills customer needs, whether customers are satisfied, and which service attributes have negative customer satisfaction indexes. The Kano model classifies the service attributes into groups, i.e., attractive, one-dimensional, must-be, or indifferent; this classification can be used to determine priorities. Kansei Engineering takes customer emotion into account and tries to identify customer needs (feelings) more specifically. The integration is aimed at determining the improvement priorities. A survey of 100 customers using 21 service attributes and 10 Kansei words found 15 attributes with negative customer satisfaction scores. However, only 9 attributes are prioritized for improvement, because they are Attractive (A) or One-dimensional (O) attributes according to the Kano classification. The Kansei Engineering analysis showed that “convenience” was the customers' most important emotion when receiving services at Café Agape. Meanwhile, 6 of the 10 Kansei words (customer emotional needs) differed significantly between two groups of Café Agape's customers: foreign/overseas customers felt happier, more relieved, friendly, welcome and attracted, but less sedate/quiet, than local/domestic ones when consuming services at Café Agape.
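    The selection step described, keeping attributes with a negative satisfaction gap whose Kano class is Attractive or One-dimensional, can be sketched as a simple filter; the SERVQUAL gap here is the usual perception-minus-expectation score, and the field names are assumptions:

```python
def improvement_priorities(attributes):
    """Combine SERVQUAL and Kano as the abstract describes: keep only
    attributes with a negative satisfaction gap (perception below
    expectation) whose Kano class is Attractive ('A') or
    One-dimensional ('O')."""
    return [a["name"] for a in attributes
            if a["perception"] - a["expectation"] < 0
            and a["kano"] in ("A", "O")]
```

    Applied to the survey data, a filter like this is what reduces the 15 negatively scored attributes down to the 9 that are prioritized for improvement.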