2,857 research outputs found

    Emotions, behaviour and belief regulation in an intelligent guide with attitude

    Abstract unavailable; please refer to the PDF.

    A Study on Emotional Conversation Analysis Based on Deep Learning

    Enabling a chatbot to express specific emotions during a conversation is a key capability in artificial intelligence, with an intuitive and measurable impact on the chatbot's usability and user satisfaction. Emotion recognition in conversation is challenging, chiefly because human dialogue conveys emotion through long-term experience, rich knowledge, context, and the intricate patterns among affective states. Many neural emotional conversation models have been proposed recently, yet letting a chatbot control which emotion to express, according to its own character, remains underexplored. At this stage, users are no longer satisfied with dialogue systems that merely solve specific tasks; they seek genuine communication. If, during a chat, the system can perceive and accurately process the user's emotions, it can greatly enrich the dialogue and foster empathy. The ultimate goal of emotional dialogue is for the machine to understand human emotions and give matching responses. On these two points, this thesis investigates the emotion recognition in conversation task and the emotional dialogue generation task in depth. Although considerable progress has been made in recent years, the complex nature of human emotions still poses difficulties and challenges. The key contributions of this thesis are summarized below: (1) Researchers have increasingly enhanced natural language models with knowledge graphs, which encode large amounts of systematic knowledge, and many studies have shown that introducing external commonsense knowledge helps enrich feature information.
We address emotion recognition in conversations by using external knowledge to enhance semantics. We employ the external knowledge graph ATOMIC as a knowledge source and propose KES, a new framework that incorporates different elements of external knowledge and conversational semantic role labeling, building on them to learn the interactions between the interlocutors in a conversation. A conversation is a sequence of coherent, ordered utterances, and capturing long-range context is a weakness of recurrent neural networks; we therefore adopt the Transformer, a structure composed of self-attention and feed-forward layers, instead of a traditional RNN, to capture remote context information. We design a self-attention layer specialized for text features enhanced with external commonsense knowledge; two LSTM networks then track each speaker's internal state and the external context state. The proposed model was evaluated on three emotion-detection-in-conversation datasets, and the experimental results show that it outperforms state-of-the-art approaches on most of them. (2) We propose an emotional dialogue model based on Seq2Seq, improved in three respects (model input, encoder structure, and decoder structure) so that it generates responses that are emotionally rich, diverse, and contextually appropriate. For the model input, emotion information and position information are added to the word vectors. In the encoder, the model encodes the current input together with sentence sentiment into a semantic vector, and separately encodes the context with sentence sentiment into a context vector, adding contextual information while keeping the current input independent.
On the decoder side, attention computes weights over the two semantic vectors separately before decoding, fully integrating the local and global emotional semantic information. We evaluated the generated responses with seven objective metrics covering context similarity, response diversity, and emotional response. Experimental results show that the model generates diverse, emotionally rich, contextually related responses.
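The self-attention the abstract relies on can be illustrated with a minimal sketch. This is not the KES model itself: the query, key, and value projections are simplified to the identity, and the input matrix stands in for utterance features already enhanced with commonsense knowledge.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of feature vectors.

    X: (seq_len, d) matrix, one row per utterance.  Each output row is a
    mixture of all rows of X, weighted by pairwise affinity, which is how
    the Transformer captures long-range context that an RNN may miss.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)       # (seq_len, seq_len) affinities
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ X                  # context-mixed representations

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))             # 5 utterances, 8-dim features
out = self_attention(X)
print(out.shape)  # (5, 8)
```

In the full model these mixed representations would feed the two LSTM state trackers; here the sketch only shows the attention step.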

    Mood inference machine : framework to infer affective phenomena in ROODA virtual learning environment

    This article presents a mechanism to infer mood states, aiming to provide virtual learning environments (VLEs) with a tool able to recognize a student's motivation. The inference model takes as parameters personality traits, motivational factors obtained from behavioral patterns, and the affective subjectivity identified in texts posted through the communication functionalities of the VLE. In the inference machine, these variables are handled with probabilistic reasoning, more precisely with Bayesian networks.
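The Bayesian-network treatment can be sketched with a toy two-parent network. The variable names and probability values below are illustrative assumptions, not taken from the ROODA model: a personality trait and the sentiment detected in a post jointly condition the student's mood, and the unobserved trait is summed out.

```python
# Prior over the (unobserved) personality trait and the observed sentiment.
P_trait = {"extravert": 0.6, "introvert": 0.4}

# Conditional probability table: P(mood = "motivated" | trait, sentiment).
P_motivated = {
    ("extravert", "positive"): 0.9,
    ("extravert", "negative"): 0.5,
    ("introvert", "positive"): 0.7,
    ("introvert", "negative"): 0.2,
}

def p_motivated_given_sentiment(sent):
    """P(motivated | sentiment), marginalizing out the personality trait."""
    return sum(P_trait[t] * P_motivated[(t, sent)] for t in P_trait)

print(round(p_motivated_given_sentiment("positive"), 2))  # 0.9*0.6 + 0.7*0.4 = 0.82
print(round(p_motivated_given_sentiment("negative"), 2))  # 0.5*0.6 + 0.2*0.4 = 0.38
```

The real inference machine conditions on more evidence (behavioral standards, text subjectivity), but the marginalization step is the same.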

    A feasibility study of psychological strengths and well-being assessment in individuals living with recurrent depression

    Current conceptualizations of mental illness focus on assessing psychopathology. A balanced approach would also assess the strengths that individuals bring to coping with illness. This study measures psychological strengths in individuals with recurrent depression, their coping strategies, and their perceptions of the usefulness of strengths assessment as a component of psychological assessment. Individuals (N = 112) with recurrent depression completed an online questionnaire measuring several psychological strengths, including gratitude, forgiveness, spirituality, and hope. Participants also described their use of coping strategies and their view of the utility of the two-continua model of mental health. A subset (n = 10) completed a follow-up telephone interview. Higher levels of gratitude, self-forgiveness, hope, and spirituality, and lower levels of optimism, were indicative of higher life satisfaction. Self-forgiveness, spirituality, and gratitude were predictors of happiness. Higher levels of hope and self-forgiveness predicted positive affect, whereas lower levels of self-forgiveness predicted negative affect. Participants reported using a range of coping resources, indicated that they valued strengths assessment, and perceived the two-continua model of mental health as empowering. The researcher discusses implications for clinical practice.

    Real-time expressive internet communications

    This research work, "Real-time Expressive Internet Communications", focuses on two subjects: the investigation of methods for automatic emotion detection and visualisation in real-time Internet communication environments, and the analysis of the influence of presenting visualised emotion-expressive images to Internet users. To detect emotion within Internet communication, the emotion communication process over the Internet needs to be examined. An emotion momentum theory was developed to illustrate this process. The theory argues that an Internet user is always in a certain emotion state; that state changes with internal and external stimuli (e.g. a received chat message) and with time, with stimulus duration and stimulus intensity being the major factors influencing it. The emotion momentum theory divides the emotions expressed in Internet communication into three dimensions: emotion category, intensity, and duration. The theory was implemented within a prototype emotion extraction engine that can analyse input text in an Internet chat environment, detect and extract the emotion being communicated, and deliver the parameters needed to invoke an appropriate expressive image on every communicating user's display. A set of experiments was carried out to test the speed and accuracy of the emotion extraction engine, and the results demonstrated acceptable performance.
The next step of this study was to design and implement an expressive image generator that produces expressive images from a single neutral facial image. Generated facial images are classified into six categories, and three intensities were achieved for each category. Users need to define only six control points and three control shapes to synthesise all the expressive images; a set of experiments tested the quality of the synthesised images and demonstrated an acceptable recognition rate for the generated facial expressions. With the emotion extraction engine and the expressive image generator, a test platform was created to evaluate the influence of emotion visualisation in the Internet communication context. A series of experiments demonstrated that emotion visualisation can enhance users' perceived performance and their satisfaction with the interfaces. The contributions to knowledge fall into four main areas: firstly, the emotion momentum theory proposed to illustrate the emotion communication process over the Internet; secondly, the innovations built into the emotion extraction engine, which senses emotional feelings in textual messages input by Internet users; thirdly, the innovations built into the expressive image generator, which synthesises facial expressions with a fast approach and a user-friendly interface; and fourthly, the identification of the influence that the visualisation of emotion has on human-computer interaction.
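The emotion momentum idea (a state with category, intensity, and duration, updated by stimuli and eroded by time) can be sketched as follows. The exponential decay rule and the replace-when-stronger update are illustrative assumptions, not the thesis's actual formulas.

```python
class EmotionState:
    """Toy model of emotion momentum: intensity decays over time and
    incoming stimuli (e.g. chat messages) reinforce or replace the state."""

    def __init__(self, category="neutral", intensity=0.0, half_life=30.0):
        self.category = category
        self.intensity = intensity
        self.half_life = half_life  # seconds for intensity to halve (assumed)

    def decay(self, dt):
        """Silence for dt seconds: intensity fades exponentially."""
        self.intensity *= 0.5 ** (dt / self.half_life)

    def stimulus(self, category, strength):
        """A same-category stimulus reinforces the state; a different
        category takes over only if it outweighs the residual intensity."""
        if category == self.category:
            self.intensity = min(1.0, self.intensity + strength)
        elif strength > self.intensity:
            self.category, self.intensity = category, strength

state = EmotionState()
state.stimulus("joy", 0.6)    # cheerful message arrives
state.decay(30.0)             # 30 s of silence: intensity halves to 0.3
state.stimulus("anger", 0.5)  # stronger negative message flips the state
print(state.category, round(state.intensity, 2))  # anger 0.5
```

The engine's job, on this view, is to turn each incoming chat message into such a (category, strength) stimulus and map the resulting state to an expressive image.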

    Ego-state Estimation from Short Texts Based on Sentence Distributed Representation

    Human personality is composed, multilaterally, of complex elements. The egogram is a method of classifying personalities into patterns according to combinations of five ego-state levels. With the recent growth of Social Networking Services (SNS), a number of studies have attempted to judge personality from statements appearing on various social networking sites. However, personality judgment based on the superficial information in such statements faces several problems: one's personality is not reflected in every statement one makes, statements are influenced by a personality that may change over time, and sufficient statement data, including the results of personality judgments, must be collected. In this paper, to automate egogram judgment, we focus on the short texts found on certain SNS sites, especially microblogs. We represent Twitter users' comments with a distributed representation (sentence vector) obtained in pre-training, and then build a model that estimates each Twitter user's ego-state levels with a deep neural network. Experimental results showed that the proposed method estimates ego-states more accurately than a bag-of-words baseline. To investigate changes of personality over time, we analyzed how the match rates of the estimation results changed before and after the egogram judgment. Moreover, we confirmed that personality pattern classification improved when a feature expressing the degree of formality of the sentence was added.
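The pipeline described (sentence vector in, five ego-state scores out) can be sketched minimally. Everything here is a placeholder: random vectors stand in for the pre-trained sentence embeddings, and the untrained two-layer network stands in for the paper's deep neural network.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical word embeddings; in the paper, sentence vectors come from
# pre-training rather than averaged random word vectors.
vocab = {w: rng.normal(size=16) for w in "i feel great today awful tired".split()}

def sentence_vector(text):
    """Mean of word vectors: a crude stand-in for a learned sentence embedding."""
    vecs = [vocab[w] for w in text.lower().split() if w in vocab]
    return np.mean(vecs, axis=0)

# Untrained two-layer scorer mapping a 16-dim sentence vector to five
# outputs, one per ego state of the egogram.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 5)), np.zeros(5)

def ego_state_scores(text):
    h = np.tanh(sentence_vector(text) @ W1 + b1)  # hidden layer
    return h @ W2 + b2                            # one score per ego state

scores = ego_state_scores("I feel great today")
print(scores.shape)  # (5,)
```

Training the weights against labeled egogram judgments, as the paper does, is what turns this skeleton into an actual estimator.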

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.