
    Robust Modeling of Epistemic Mental States

    This work identifies and advances research challenges in the analysis of facial features and their temporal dynamics in relation to epistemic mental states in dyadic conversations. The epistemic states considered are Agreement, Concentration, Thoughtful, Certain, and Interest. In this paper, we perform a number of statistical analyses and simulations to identify the relationship between facial features and epistemic states. Non-linear relations are found to be more prevalent, while temporal features derived from the original facial features demonstrate a strong correlation with intensity changes. We then propose a novel prediction framework that takes facial features and their nonlinear relation scores as input and predicts the different epistemic states in videos. The prediction of epistemic states is boosted when the classification of emotion-change regions (rising, falling, or steady-state) is incorporated with the temporal features. The proposed predictive models predict the epistemic states with significantly improved accuracy: the correlation coefficient (CoERR) for Agreement is 0.827, for Concentration 0.901, for Thoughtful 0.794, for Certain 0.854, and for Interest 0.913. Comment: Accepted for publication in Multimedia Tools and Applications, Special Issue: Socio-Affective Technologies.
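    The abstract does not specify how its nonlinear relation scores are computed; as a minimal illustrative sketch (not the paper's method), one simple score compares rank correlation against linear correlation for a feature/intensity pair, with all data below being a hypothetical toy signal:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def nonlinear_relation_score(feature, intensity):
    # Flag a relation as nonlinear when rank (Spearman) correlation
    # clearly exceeds linear (Pearson) correlation for the same pair.
    r_lin, _ = pearsonr(feature, intensity)
    r_rank, _ = spearmanr(feature, intensity)
    return abs(r_rank) - abs(r_lin)

# Toy monotonic-but-nonlinear signal: intensity grows as feature**7,
# so Spearman correlation is 1.0 while Pearson correlation is well below it.
rng = np.random.default_rng(0)
feature = rng.uniform(-1, 1, 500)
intensity = feature ** 7
score = nonlinear_relation_score(feature, intensity)
```

    A score near zero suggests an essentially linear relation; a clearly positive score suggests a monotonic but nonlinear one.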

    Social Attitude Towards A Conversational Character


    Music emotion identification from lyrics

    ABSTRACT - Very large online music databases have recently been created by vendors, but they generally lack content-based retrieval methods. One exception is Allmusic.com, which offers browsing by musical emotion, using human experts to classify several thousand songs into 183 moods. In this paper, machine learning techniques are used instead of human experts to extract emotions in music. The classification is based on a psychological model of emotion that is extended to 23 specific emotion categories. Our results for mining the lyrical text of songs for specific emotions are promising: they generate classification models that are human-comprehensible, and results that correspond to commonsense intuitions about specific emotions. The lyric mining on which this paper focuses is one aspect of a broader research effort that combines different classifiers of musical emotion, such as acoustics and lyrical text.
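    The paper's exact features and learner are not reproduced in the abstract; as an illustrative sketch only, a bag-of-words emotion classifier over lyrics can be built with scikit-learn (the lyrics and the two emotion labels below are toy data, not the paper's 23-category scheme):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labelled lyric fragments (hypothetical data for illustration).
lyrics = [
    "tears fall down my lonely heart",
    "dancing all night feeling alive",
    "I miss you every cold dark day",
    "sunshine smiles and summer joy",
]
labels = ["sad", "happy", "sad", "happy"]

# TF-IDF features feeding a Naive Bayes classifier, whose per-word
# weights remain human-comprehensible.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(lyrics, labels)
pred = model.predict(["my heart is lonely and cold"])[0]
```

    A linear or Naive Bayes model keeps the learned word-emotion associations inspectable, which matches the abstract's emphasis on human-comprehensible classification models.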

    A prototype for a conversational companion for reminiscing about images

    This work was funded by the COMPANIONS project sponsored by the European Commission as part of the Information Society Technologies (IST) programme under EC grant number IST-FP6-034434. Companions demonstrators can be seen at: http://www.dcs.shef.ac.uk/∼roberta/companions/Web/.

    This paper describes an initial prototype of the Companions project (www.companions-project.org): the Senior Companion (SC), designed as a platform to display novel approaches to: (1) the use of Information Extraction (IE) techniques to extract the content of incoming dialogue utterances after an ASR phase; (2) the conversion of the input to RDF form to allow the generation of new facts from existing ones, under the control of a Dialogue Manager (DM) that also has access to stored knowledge and to knowledge accessed in real time from the web, all in RDF form; (3) a DM expressed as a stack-and-network virtual machine that models mixed initiative in dialogue control; and (4) a tuned dialogue act detector based on corpus evidence. The prototype platform was evaluated, and we describe this evaluation; the platform is also designed to support more extensive forms of emotion detection carried by both speech and lexical content, as well as extended forms of machine learning. We describe preliminary studies and results for these, in particular a novel approach to enabling reinforcement learning for open dialogue systems through the detection of emotion in the speech signal and its deployment as a form of learned DM, at a higher level than the DM virtual machine and able to direct the SC's responses to a more emotionally appropriate part of its repertoire. © 2010 Elsevier Ltd. All rights reserved. Peer-reviewed.
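    The fact-generation idea in point (2) can be illustrated with a small library-free sketch: facts stored as subject-predicate-object triples, with one forward-chaining rule deriving new triples from existing ones. The triples and the rule below are hypothetical examples, not the project's actual ontology or DM logic:

```python
# Facts as (subject, predicate, object) triples, RDF-style.
facts = {
    ("Mary", "parentOf", "John"),
    ("John", "parentOf", "Anna"),
}

def derive_grandparents(triples):
    # One illustrative inference rule: parentOf followed by parentOf
    # yields grandparentOf, producing a new fact from existing ones.
    new = set()
    for a, p1, b in triples:
        for b2, p2, c in triples:
            if p1 == "parentOf" and p2 == "parentOf" and b == b2:
                new.add((a, "grandparentOf", c))
    return new

derived = derive_grandparents(facts)
```

    In the actual system such derived facts would sit alongside stored and web-retrieved knowledge, all in RDF form, for the DM to draw on during dialogue.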

    Identifying problematic dialogs in a human-computer dialog system

    In this thesis, we present the development of an automatic system that identifies problematic dialogues in the context of a Human-Computer Dialog System (HCDS). The system is an application in the pattern classification domain. We propose a probabilistic approach that predicts user satisfaction for each turn of a dialogue; all the features used by the system are automatically extracted from the utterance. A robust and fast machine learning scheme, the Hidden Markov Model (HMM), is used to build the system. To evaluate its performance, we experimented on two publicly distributed corpora, DARPA Communicator 2000 and 2001, using 10-fold stratified cross-validation. Our results show that the system could be used in real-life applications.
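    The thesis's HMM parameters and feature set are not given in the abstract; a minimal self-contained sketch of turn-level decoding with the Viterbi algorithm (all state names, probabilities, and the observation coding below are hypothetical) might look like:

```python
import numpy as np

# Hidden states: 0 = satisfied turn, 1 = problematic turn.
# Observations per turn: 0 = high ASR confidence, 1 = low ASR confidence.
start = np.array([0.8, 0.2])
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
emit = np.array([[0.9, 0.1],   # satisfied turns are mostly high-confidence
                 [0.2, 0.8]])  # problematic turns are mostly low-confidence

def viterbi(obs):
    # Most likely hidden-state sequence for one dialogue's turn observations.
    n = len(obs)
    logp = np.log(start) + np.log(emit[:, obs[0]])
    back = np.zeros((n, 2), dtype=int)
    for t in range(1, n):
        scores = logp[:, None] + np.log(trans)   # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logp.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# A dialogue whose last three turns show low ASR confidence is decoded
# as drifting into problematic turns.
states = viterbi([0, 0, 1, 1, 1])
```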