
    Definition of culture-dependent emotion triggers and their application to valence and emotion classification in texts

    This paper presents a method to automatically spot and classify the valence and emotions present in written text, based on a concept we introduce: emotion triggers. The first step consists of incrementally building a culture-dependent lexical database of emotion triggers, drawing on the theory of relevance from pragmatics, Maslow's theory of human needs from psychology, and Neef's theory of human needs from economics. We start from a core set of terms and expand it using lexical resources such as WordNet, complemented by NomLex, with sense numbers disambiguated using the Relevant Domains concept. The mapping among languages is accomplished using EuroWordNet, and the completion and projection to different cultures is done through language-specific commonsense knowledge bases. Subsequently, we show how the constructed database can be used to mine texts for valence (polarity) and affective meaning. An evaluation is performed on the SemEval Task No. 14 "Affective Text" test data and its corresponding translation to Spanish. The results and improvements are presented together with a discussion of the strong and weak points of the method and directions for future work.
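The trigger-database idea can be sketched in miniature. The entries below are illustrative inventions, not the paper's WordNet-derived triggers: a small lexicon maps emotion triggers to valence weights, and a text is scanned for triggers to estimate its polarity and affective categories.

```python
# Toy emotion-trigger lexicon: emotion -> {trigger term: valence weight}.
# These entries are hypothetical examples, not the paper's actual database.
TRIGGERS = {
    "joy": {"celebrate": 0.9, "gift": 0.6, "holiday": 0.5},
    "fear": {"threat": -0.8, "danger": -0.9},
    "sadness": {"loss": -0.7, "grief": -0.9},
}

def score_text(text):
    """Return (overall valence, per-emotion trigger counts) for a text."""
    tokens = text.lower().split()
    valence, hits = 0.0, {}
    for emotion, terms in TRIGGERS.items():
        for tok in tokens:
            if tok in terms:
                valence += terms[tok]
                hits[emotion] = hits.get(emotion, 0) + 1
    return valence, hits

val, hits = score_text("A holiday gift to celebrate")
```

In the paper's full method, the lexicon would instead be grown from a seed set via WordNet and EuroWordNet and adapted per culture; this sketch shows only the final lookup-and-score step.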

    Multimodal Emotion Recognition among Couples from Lab Settings to Daily Life using Smartwatches

    Couples generally manage chronic diseases together, and the management takes an emotional toll on both patients and their romantic partners. Consequently, recognizing the emotions of each partner in daily life could provide insight into their emotional well-being in chronic disease management. The emotions of partners are currently inferred, in the lab and in daily life, using self-reports, which are not practical for continuous emotion assessment, or observer reports, which are manual, time-intensive, and costly. Currently, there exists no comprehensive overview of work on emotion recognition among couples. Furthermore, approaches to emotion recognition among couples have (1) focused on English-speaking couples in the U.S., (2) used data collected in the lab, and (3) performed recognition using observer ratings rather than partners' self-reported/subjective emotions. In the body of work contained in this thesis (8 papers: 5 published and 3 currently under review in various journals), we fill the current gap in the literature on couples' emotion recognition, develop emotion recognition systems using 161 hours of data from a total of 1,051 individuals, and make contributions toward taking couples' emotion recognition from the lab, which is the status quo, to daily life. This thesis contributes toward building automated emotion recognition systems that would eventually enable partners to monitor their emotions in daily life and enable the delivery of interventions to improve their emotional well-being. Comment: PhD Thesis, 2022, ETH Zurich.

    EmotiBlog: towards a finer-grained sentiment analysis and its application to opinion mining

    Paper presented at the IV Jornadas TIMM, Torres (Jaén), 7-8 April 2011. EmotiBlog is a corpus designed for Sentiment Analysis research. Preliminary studies demonstrated its relevance as a Machine Learning resource for detecting subjective information. In this paper we explore additional features through a detailed analysis. In addition, we compare EmotiBlog with other well-known Sentiment Analysis resources, such as the JRC corpus. Finally, as a result of our research, we developed an Opinion Mining application that takes user opinions into account when rating the results of a search engine specialized in mobile phones.
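A minimal sketch of the opinion-aware rating step: here a toy word-counting polarity score stands in for EmotiBlog's trained classifier, and the product names and review snippets are invented.

```python
# Hypothetical re-ranking of search results for mobile phones by user opinion.
# Word lists and data are illustrative; the real system would use a
# machine-learned subjectivity/polarity classifier trained on EmotiBlog.
POSITIVE = {"great", "good", "excellent"}
NEGATIVE = {"bad", "poor", "terrible"}

def opinion_score(reviews):
    """Sum +1 for each positive word and -1 for each negative word."""
    score = 0
    for review in reviews:
        for word in review.lower().split():
            if word in POSITIVE:
                score += 1
            elif word in NEGATIVE:
                score -= 1
    return score

results = [
    ("phone_a", ["Great battery", "good camera"]),
    ("phone_b", ["Poor screen", "bad battery life"]),
]
# Rank the search results so that well-reviewed phones come first.
ranked = sorted(results, key=lambda r: opinion_score(r[1]), reverse=True)
```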

    Emotions Trump Facts: The Role of Emotions on Social Media: A Literature Review

    Emotions are an inseparable part of how people use social media. While a more cognitive view initially dominated social media research, looking into areas such as knowledge sharing, the topic of emotions and their role on social media is gaining increasing interest. As is typical of an emerging field, there is no synthesized view of what has been discovered so far and, more importantly, what has not been. This paper provides an overview of research regarding the expression of emotions on social media and its impact, and makes recommendations for future research in the area. Considering differentiated emotions instead of measuring positive or negative sentiment, drawing from theories of emotion, and distinguishing between sentiment and opinion could provide valuable insights in the field.

    Music emotion recognition: a multimodal machine learning approach

    Music emotion recognition (MER) is an emerging domain of the Music Information Retrieval (MIR) scientific community, and searching for music by emotion is one of the selection methods most preferred by web users. As the world goes digital, the musical contents in online databases such as Last.fm have expanded exponentially, requiring substantial manual effort to manage and keep updated. Therefore, the demand for innovative and adaptable search mechanisms that can be personalized according to users' emotional state has gained increasing consideration in recent years. This thesis concentrates on the music emotion recognition problem by presenting several classification models fed by textual features as well as audio attributes extracted from the music. In this study, we build both supervised and semi-supervised classification designs across four research experiments that address the emotional role of audio features, such as tempo, acousticness, and energy, and also the impact of textual features extracted by two different approaches, TF-IDF and Word2Vec. Furthermore, we propose a multimodal approach using a combined feature set consisting of features from the audio content as well as from context-aware data. For this purpose, we generated a ground-truth dataset containing over 1,500 labeled song lyrics, together with an unlabeled corpus of more than 2.5 million Turkish documents, in order to build an accurate automatic emotion classification system. The analytical models were built by applying several algorithms to cross-validated data using Python. In the experiments, the best performance attained with audio features alone was 44.2%, whereas textual features yielded better results, with accuracy scores of 46.3% and 51.3% under the supervised and semi-supervised learning paradigms, respectively. Finally, even though we created a comprehensive feature set combining audio and textual features, this approach did not yield any significant improvement in classification performance.
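The TF-IDF weighting used for the textual features can be illustrated with a minimal hand-rolled version (a simplified sketch; the thesis's actual pipeline operates on a far larger lyrics corpus):

```python
import math

def tfidf(docs):
    """docs: list of token lists -> list of {term: tf-idf weight} dicts."""
    n = len(docs)
    # Document frequency: number of documents containing each term.
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        vec = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)  # term frequency
            idf = math.log(n / df[term])     # inverse document frequency
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors

vecs = tfidf([["sad", "tears"], ["happy", "sun"], ["sad", "rain"]])
# A term unique to one document ("happy") outweighs one shared by two ("sad").
```

Weights like these would then feed a supervised classifier; Word2Vec embeddings, used in the thesis's other experiments, replace this sparse representation with dense vectors.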

    An Actor-Centric Approach to Facial Animation Control by Neural Networks For Non-Player Characters in Video Games

    Game developers increasingly consider the degree to which character animation emulates the facial expressions found in cinema. Employing animators and actors to produce cinematic facial animation by mixing motion capture and hand-crafted animation is labor-intensive and therefore expensive. Emotion corpora and neural network controllers have shown promise toward developing autonomous animation that does not rely on motion capture. Previous research and practice in the disciplines of Computer Science, Psychology, and the Performing Arts have provided frameworks on which to build a workflow for creating an emotion AI system that can animate the facial mesh of a 3D non-player character by deploying a combination of related theories and methods. However, past investigations and their resulting production methods largely ignore the emotion generation systems that have evolved in the performing arts for more than a century. We find very little research that embraces the intellectual process of trained actors as complex collaborators from which to understand and model the training of a neural network for character animation. This investigation demonstrates a workflow design that integrates knowledge from the performing arts and the affective branches of the social and biological sciences. Our workflow begins with developing and annotating a fictional scenario with actors, proceeds to producing a video emotion corpus, designing, training, and validating a neural network, and analyzing the emotion data annotation of the corpus and neural network, and finally determines the resemblant behavior of its autonomous animation control of a 3D character facial mesh. The resulting workflow includes a method for the development of a neural network architecture whose initial efficacy as a facial emotion expression simulator has been tested and validated as substantially resemblant to the character behavior developed by a human actor.

    Player agency in interactive narrative: audience, actor & author

    The question motivating this review paper is: how can computer-based interactive narrative be used as a constructivist learning activity? The paper proposes that player agency can be used to link interactive narrative to learner agency in constructivist theory, and to classify approaches to interactive narrative. The traditional question driving research in interactive narrative is, ‘how can an interactive narrative deal with a high degree of player agency, while maintaining a coherent and well-formed narrative?’ This question derives from an Aristotelian approach to interactive narrative that, as the question shows, is inherently antagonistic to player agency. Within this approach, player agency must be restricted and manipulated to maintain the narrative. Two alternative approaches, based on Brecht’s Epic Theatre and Boal’s Theatre of the Oppressed, are reviewed. If a Boalian approach to interactive narrative is taken, the conflict between narrative and player agency dissolves. The question that emerges from this approach is quite different from the traditional question above, and presents a more useful approach to applying interactive narrative as a constructivist learning activity.

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, in the second section (Chapters 8 to 11) we present research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. In the last section of the book (Chapters 17 to 22) we present applications related to affective computing.