
    General Purpose Textual Sentiment Analysis and Emotion Detection Tools

    Textual sentiment analysis and emotion detection consist of retrieving the sentiment or emotion carried by a text or document. This task is useful in many domains: opinion mining, prediction, feedback analysis, etc. However, building a general-purpose tool for sentiment analysis and emotion detection raises a number of issues: theoretical issues, such as the dependence on the domain or the language, but also practical issues, such as the representation of emotions for interoperability. In this paper we present our sentiment/emotion analysis tools, the way we propose to circumvent these difficulties, and the applications they are used for. Comment: Workshop on Emotion and Computing (2013)

    Sharing Video Emotional Information in the Web

    Video growth over the Internet has changed the way users search, browse and view video content. Watching movies over the Internet is increasing and becoming a pastime. The possibility of streaming Internet content to TV, together with advances in video compression and video streaming techniques, has made this recent modality of watching movies easy and practical. Web portals, as a worldwide means of multimedia data access, need to have their contents properly classified in order to meet users’ needs and expectations. The authors propose a set of semantic descriptors based both on user physiological signals, captured while watching videos, and on low-level video feature extraction. These XML-based descriptors contribute to the creation of automatic affective meta-information that will not only enhance a web-based video recommendation system driven by emotional information, but also improve search and retrieval of videos’ affective content, from both users’ personal classifications and content classifications, in the context of a web portal.
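    The abstract does not reproduce the authors' descriptor schema; as a rough illustration of what an XML-based affective descriptor for a video might look like, the sketch below builds one with Python's standard library. All element and attribute names (and the arousal/valence fields) are hypothetical, not the paper's.

```python
import xml.etree.ElementTree as ET

def build_affective_descriptor(video_id, arousal, valence, source):
    """Build a hypothetical XML affective descriptor for one video.

    Element/attribute names are illustrative only, not the authors' schema.
    `source` records whether the annotation came from physiological
    signals or from low-level video features.
    """
    root = ET.Element("affectiveDescriptor", videoId=video_id)
    emotion = ET.SubElement(root, "emotion", source=source)
    ET.SubElement(emotion, "arousal").text = f"{arousal:.2f}"
    ET.SubElement(emotion, "valence").text = f"{valence:.2f}"
    return ET.tostring(root, encoding="unicode")

xml_doc = build_affective_descriptor("clip42", 0.7, -0.3, "physiological")
```

    Such a record could then be indexed by a portal alongside the video's other metadata to support affect-based search.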

    Ontology-Based Method for Analysis of Inconsistency Factors in Emotion Recognition

    This paper addresses the problem of inconsistency in emotion recognition. One of the existing challenges is the exploration of the factors that can influence this inconsistency. The aim of the paper is therefore to present a method for capturing knowledge of which factors, and which values of these factors, influence the inconsistencies between recognized emotional states. A high-level, semi-automatic method for identifying these factors is presented. The input of the method is a structured dataset and the output is a set of rules identifying when recognized emotional states are consistent or not. The presented method is validated on a dataset prepared for emotion recognition from facial expressions using various methods.
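    The abstract describes rules that decide, per record of a structured dataset, whether two recognized emotional states are consistent. A minimal sketch of how such rules might be applied is shown below; the factor names (`illumination`, `face_visibility`) and thresholds are hypothetical, not the paper's mined rules.

```python
def check_consistency(record):
    """Classify a record as consistent/inconsistent with a reason.

    `record` holds two recognizers' labels plus contextual factors.
    The rules below are illustrative placeholders for rules that an
    ontology-based mining step would produce.
    """
    if record["method_a"] == record["method_b"]:
        return ("consistent", "labels agree")
    # Hypothetical rule: disagreement is attributed to a known factor
    if record.get("illumination", 1.0) < 0.3:
        return ("inconsistent", "low illumination")
    if record.get("face_visibility", 1.0) < 0.5:
        return ("inconsistent", "partial occlusion")
    return ("inconsistent", "unexplained disagreement")

verdict, reason = check_consistency(
    {"method_a": "joy", "method_b": "anger", "illumination": 0.2})
```

    In the paper's setting the rule set is learned from data rather than hand-written, but the output has this shape: a decision plus the factor values that explain it.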

    A framework for human-like behavior in an immersive virtual world

    Just as readers feel immersed when a story-line adheres to their experiences, users will more easily feel immersed in a virtual environment if the behavior of the characters in that environment adheres to their expectations, based on their life-long observations of the real world. This paper introduces a framework that allows authors to establish natural, human-like behavior, physical interaction and emotional engagement of characters living in a virtual environment. Represented by realistic virtual characters, this framework allows people to feel immersed in an Internet-based virtual world in which they can meet and share experiences in a natural way, just as they would in real life. Rather than just being visualized in a 3D space, the virtual characters (autonomous agents as well as avatars representing users) in the immersive environment facilitate social interaction and multi-party collaboration, mixing the virtual with the real.

    The SEMAINE API: a component integration framework for a naturally interacting and emotionally competent embodied conversational agent

    The present thesis addresses the topic area of Embodied Conversational Agents (ECAs) with capabilities for natural interaction with a human user and emotional competence with respect to the perception and generation of emotional expressivity. The focus is on the technological underpinnings that facilitate the implementation of a real-time system with these capabilities, built from re-usable components. The thesis comprises three main contributions. First, it describes a new component integration framework, the SEMAINE API, which makes it easy to build emotion-oriented systems from components that interact with one another using standard and pre-standard XML representations. Second, it presents a prepare-and-trigger system architecture which substantially speeds up the time to animation for system utterances that can be pre-planned. Third, it reports on the W3C Emotion Markup Language, an upcoming web standard for representing emotions in technological systems. We assess critical aspects of system performance, showing that the framework provides a good basis for implementing real-time interactive ECA systems, and illustrate by means of three examples that the SEMAINE API makes it easy to build new emotion-oriented systems from new and existing components.
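    To make the third contribution concrete: the W3C Emotion Markup Language (EmotionML) represents emotions as `<emotion>` elements whose children annotate categories, dimensions, and so on. The sketch below serializes a single categorical annotation in the EmotionML namespace using Python's standard library; it follows the basic structure of the recommendation, but consult the specification for the vocabulary-declaration mechanism it omits.

```python
import xml.etree.ElementTree as ET

# Namespace of the W3C EmotionML recommendation
EMOTIONML_NS = "http://www.w3.org/2009/10/emotionml"

def emotionml_category(name, value):
    """Serialize one categorical emotion annotation as EmotionML.

    Minimal structure only: a root <emotionml> containing one <emotion>
    with a <category> child carrying a name and an intensity value.
    """
    root = ET.Element(f"{{{EMOTIONML_NS}}}emotionml")
    emotion = ET.SubElement(root, f"{{{EMOTIONML_NS}}}emotion")
    ET.SubElement(emotion, f"{{{EMOTIONML_NS}}}category",
                  name=name, value=str(value))
    return ET.tostring(root, encoding="unicode")

doc = emotionml_category("anger", 0.8)
```

    In a SEMAINE-style pipeline, messages of this kind are what components exchange over the integration middleware.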

    Use of Vocal Prosody to Express Emotions in Robotic Speech

    Vocal prosody (pitch, timing, loudness, etc.) and its use to convey emotions are essential components of speech communication between humans. The objective of this dissertation research was to determine the efficacy of using varying vocal prosody in robotic speech to convey emotion. Two pilot studies and two experiments were performed to address the shortcomings of previous HRI research in this area. The pilot studies were used to determine a set of vocal prosody modification values for a female voice model using the MARY speech synthesizer to convey the emotions anger, fear, happiness, and sadness. Experiment 1 validated that participants perceived these emotions, along with a neutral vocal prosody, at rates significantly higher than chance. Four of the vocal prosodies (anger, fear, neutral, and sadness) were recognized at rates approaching the recognition rate (60%) of emotions in person-to-person speech. During Experiment 2, the robot led participants through a creativity test while making statements using one of the validated emotional vocal prosodies. The ratings of the robot’s positive qualities and the creativity scores by the participant group that heard non-negative vocal prosodies (happiness, neutral) did not significantly differ from the ratings and scores of the participant group that heard the negative vocal prosodies (anger, fear, sadness). Therefore, Experiment 2 failed to show that the use of emotional vocal prosody in a robot’s speech influenced the participants’ appraisal of the robot or the participants’ performance on this specific task. At this time robot designers and programmers should not expect that vocal prosody alone will have a significant impact on the acceptability or the quality of human-robot interactions. Further research is required to show that multi-modal (vocal prosody along with facial expressions, body language, or linguistic content) expressions of emotions by robots will be effective at improving human-robot interactions.
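    The dissertation's validated prosody modification values are not given in the abstract. As a sketch of what such a per-emotion parameter set might look like, the table below maps emotions to pitch, rate, and loudness modifications; every number is an illustrative placeholder, not a value from the pilot studies.

```python
# Hypothetical prosody modifications per emotion (illustrative values,
# not the dissertation's): pitch shift in semitones, speaking rate and
# volume as multipliers relative to the neutral voice.
PROSODY = {
    "anger":     {"pitch_shift": -2.0, "rate": 1.2, "volume": 1.4},
    "fear":      {"pitch_shift": +3.0, "rate": 1.3, "volume": 0.9},
    "happiness": {"pitch_shift": +2.0, "rate": 1.1, "volume": 1.1},
    "sadness":   {"pitch_shift": -3.0, "rate": 0.8, "volume": 0.8},
    "neutral":   {"pitch_shift":  0.0, "rate": 1.0, "volume": 1.0},
}

def apply_pitch(base_pitch_hz, emotion):
    """Shift a base F0 by the emotion's semitone offset (12-TET)."""
    semitones = PROSODY[emotion]["pitch_shift"]
    return base_pitch_hz * 2 ** (semitones / 12)
```

    A synthesizer front-end would translate such a table into its own pitch, duration, and gain controls before rendering each utterance.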

    Emotion transfer protocol

    A problem exists in computer-mediated communication (CMC). A distinct lack of presence and emotional nuance causes the quality of CMC to be shallower than face-to-face communication, causing misunderstandings and a lack of empathy. This thesis proposes a solution: widening the emotional bandwidth by augmenting the digital communication channel with new technologies and with principles derived from scientific theory and design practice. The goal of this thesis is to draft a proposal for a new internet protocol: the Emotion Transfer Protocol. Several questions need to be answered: How can emotions be described in an accurate and meaningful way? How can emotions be measured, transmitted, and represented? This thesis approaches these questions from an inclusive point of view, by considering different and even opposing answers, leaving space for future work to expand and reduce the scope of the protocol. The protocol itself is divided into three components: input, transmission, and output. Each of the components is presented as a collection of approaches that are currently used in daily life and in research to represent, map, and read emotions. An interesting finding, present on all levels of emotion science and technology, is a divide between unconscious and conscious representations; this is also considered in the protocol by dividing it into an explicit and an implicit version. A novel idea of unlabeled emotions is presented, meaning emotional representations that are left to be interpreted by the receiver. Unlabeled emotions and emotion transmission are explored in three different practical art, design, and research projects.
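    The thesis only drafts the protocol, so no wire format is fixed. The sketch below imagines one possible message shape covering the distinctions the abstract names: an explicit vs. implicit mode, and a label that may be absent to model "unlabeled" emotions. All field names are hypothetical, not the thesis's draft.

```python
from dataclasses import dataclass, field
from typing import Optional
import json
import time

@dataclass
class EmotionMessage:
    """Hypothetical message for an Emotion Transfer Protocol sketch.

    mode:   "explicit" (consciously declared by the sender) or
            "implicit" (sensed, e.g. from physiological signals).
    label:  None models an *unlabeled* emotion, whose interpretation
            is left entirely to the receiver.
    signal: raw measurements accompanying the message, if any.
    """
    sender: str
    mode: str
    label: Optional[str] = None
    signal: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

    def to_json(self) -> str:
        return json.dumps({
            "sender": self.sender, "mode": self.mode,
            "label": self.label, "signal": self.signal,
            "timestamp": self.timestamp,
        })

msg = EmotionMessage(sender="alice", mode="implicit", signal={"heart_rate": 88})
```

    Transmitting the measurements without a label, as here, is exactly the unlabeled case: the receiver decides what a heart rate of 88 means.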

    Multimodal Approach for Emotion Recognition Using a Formal Computational Model

    Emotions play a crucial role in human-computer interaction. They are generally expressed and perceived through multiple modalities such as speech, facial expressions, and physiological signals. Indeed, the complexity of emotions makes their acquisition very difficult and makes unimodal systems (i.e., systems observing only one source of emotion) unreliable and often unfeasible in applications of high complexity. Moreover, the lack of a standard for modeling human emotions hinders the sharing of affective information between applications. In this paper, we present a multimodal approach to emotion recognition from many sources of information. The paper aims to provide a multimodal system for emotion recognition and exchange that will facilitate inter-system exchanges and improve the credibility of emotional interaction between users and computers. We elaborate a multimodal emotion recognition method from physiological data based on signal processing algorithms. Our method makes it possible to recognize emotions composed of several aspects, such as simulated and masked emotions. The method uses a new multidimensional model, based on an algebraic representation, to represent emotional states. The experimental results show that the proposed multimodal emotion recognition method improves the recognition rates in comparison to the unimodal approach. Compared to state-of-the-art multimodal techniques, the proposed method gives good results, with 72% of correct recognition.
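    The paper's algebraic multidimensional model is not detailed in the abstract. To illustrate why combining modalities can beat a unimodal system, the sketch below implements a generic late-fusion scheme (weighted averaging of per-modality class probabilities); this is a common baseline, not the authors' method, and the modality names are hypothetical.

```python
def fuse_modalities(predictions, weights=None):
    """Late fusion: weighted average of per-modality class probabilities.

    predictions: {modality: {emotion: probability}}
    weights:     optional {modality: weight}; uniform if omitted.
    Returns the emotion with the highest fused probability.
    """
    weights = weights or {m: 1.0 for m in predictions}
    total = sum(weights[m] for m in predictions)
    fused = {}
    for modality, probs in predictions.items():
        w = weights[modality] / total
        for emotion, p in probs.items():
            fused[emotion] = fused.get(emotion, 0.0) + w * p
    return max(fused, key=fused.get)

# Two hypothetical physiological channels disagree in confidence but
# agree in tendency; fusion resolves them into one decision.
label = fuse_modalities({
    "skin_conductance": {"joy": 0.2, "stress": 0.8},
    "heart_rate":       {"joy": 0.4, "stress": 0.6},
})
```

    A single unreliable channel is outvoted here, which is the intuition behind the multimodal gains the paper reports.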