8 research outputs found

    An Affect-Rich Neural Conversational Model with Biased Attention and Weighted Cross-Entropy Loss

    Affect conveys important implicit information in human communication. Having the capability to correctly express affect during human-machine conversations is one of the major milestones in artificial intelligence. In recent years, extensive research on open-domain neural conversational models has been conducted; however, embedding affect into such models remains underexplored. In this paper, we propose an end-to-end affect-rich open-domain neural conversational model that produces responses that are not only appropriate in syntax and semantics, but also rich in affect. Our model extends the Seq2Seq model and adopts VAD (Valence, Arousal and Dominance) affective notations to embed each word with affect. In addition, our model accounts for the effect of negators and intensifiers via a novel affective attention mechanism, which biases attention towards affect-rich words in input sentences. Lastly, we train our model with an affect-incorporated objective function to encourage the generation of affect-rich words in the output responses. Evaluations based on both perplexity and human judgments show that our model outperforms the state-of-the-art baseline model of comparable size in producing natural and affect-rich responses. Comment: AAAI-1
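The two mechanisms this abstract names, an affect-biased attention and a weighted cross-entropy loss, can be sketched roughly as follows. This is a minimal NumPy sketch under stated assumptions: the `VAD` entries, the neutral reference point, and the `beta`/`gamma` weights are illustrative, not the paper's actual lexicon or hyperparameters.

```python
import numpy as np

# Illustrative VAD (Valence, Arousal, Dominance) lookup; real systems use a
# lexicon. Values below are made up for demonstration.
VAD = {
    "happy":   (0.9, 0.7, 0.6),
    "the":     (0.5, 0.1, 0.5),
    "furious": (0.1, 0.9, 0.7),
}
NEUTRAL = np.array([0.5, 0.1, 0.5])  # assumed neutral reference point

def affect_strength(word):
    """Distance from the neutral VAD point: larger = more affect-rich."""
    vad = np.array(VAD.get(word, NEUTRAL))
    return np.linalg.norm(vad - NEUTRAL)

def biased_attention(scores, words, beta=1.0):
    """Add an affect bias to raw attention scores, then softmax."""
    biased = np.array(scores, dtype=float) + \
        beta * np.array([affect_strength(w) for w in words])
    e = np.exp(biased - biased.max())
    return e / e.sum()

def weighted_cross_entropy(probs, target_idx, target_word, gamma=1.0):
    """Up-weight the loss on affect-rich target words."""
    weight = 1.0 + gamma * affect_strength(target_word)
    return -weight * np.log(probs[target_idx])

attn = biased_attention([0.2, 0.1, 0.3], ["happy", "the", "furious"])
```

With similar raw scores, the bias shifts attention mass towards "furious" and "happy" and away from the neutral "the", which is the intended effect of attending to affect-rich input words.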

    Target Guided Emotion Aware Chat Machine

    The consistency of a response to a given post at the semantic and emotional levels is essential for a dialogue system to deliver human-like interactions. However, this challenge is not well addressed in the literature, since most approaches neglect the emotional information conveyed by a post while generating responses. This article addresses the problem by proposing a unified end-to-end neural architecture that simultaneously encodes the semantics and the emotions in a post and leverages target information to generate more intelligent responses with appropriately expressed emotions. Extensive experiments on real-world data demonstrate that the proposed method outperforms the state-of-the-art methods in terms of both content coherence and emotion appropriateness. Comment: To appear on TOIS 202

    Dialogue Systems: A Review (Sistemas de diálogo: una revisión)

    Spoken dialogue systems are computer programs developed to interact with users through speech in order to provide them with specific automated services. The interaction is carried out by means of dialogue turns, which, in many studies in the literature, researchers aim to make as similar as possible to human-human dialogue in terms of naturalness, intelligence and affective content. In this paper we describe the fundamentals of these systems, including the main technologies employed in their development. We also present the evolution of this technology and discuss some current applications, as well as development paradigms, including scripting languages and the development of conversational interfaces for mobile apps. Correct user modelling is a key aspect of this technology, which is why we also describe affective, personality and contextual models. Finally, we address some current research trends in verbal communication, multimodal interaction and dialogue management.

    An Approach for Contextual Control in Dialogue Management with Belief State Trend Analysis and Prediction

    This thesis applies the theory of naturalistic decision making (NDM), a model from human psychology, to the study of dialogue management systems across the major approaches, from the classical approach based on finite state machines to the most recent approach using partially observable Markov decision processes (POMDPs). While most approaches use various techniques to estimate the system state, a POMDP-based system uses the belief state to make decisions. In addition to state estimation, a POMDP provides a mechanism to model uncertainty and allows error recovery. However, applying the Markov assumption over the belief-state space in current POMDP models causes a significant loss of valuable information in the dialogue history, leading to unfaithful tracking of the user's intention. There is also a need for adequate interaction with users according to their level of knowledge. To improve the performance of POMDP-based dialogue management, this thesis proposes a method that enables dynamic control of dialogue management, with three contributions: introducing historical belief information into the POMDP model; analyzing its trend and predicting the user's belief states from this history; and using the derived information to control the system according to the user's intention by switching between contextual control modes. Theoretical derivations and simulation experiments provide evidence that the proposed algorithm dynamically controls the agent's dialogue to improve human-computer interaction.
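The combination of standard POMDP belief tracking with a trend prediction over the belief history can be illustrated with a toy sketch. Everything here is an assumption for illustration: the two-state user-intent space, the hand-made transition matrix `T` and observation matrix `O`, and the first-difference trend extrapolation are stand-ins, not the thesis's actual model.

```python
import numpy as np

T = np.array([[0.8, 0.2],    # P(s' | s): rows index the current state
              [0.3, 0.7]])
O = np.array([[0.9, 0.1],    # P(obs | s'): rows index the next state
              [0.2, 0.8]])

def belief_update(b, obs):
    """Standard POMDP belief update: predict with T, correct with O."""
    predicted = b @ T                  # prior over the next state
    corrected = predicted * O[:, obs]  # weight by observation likelihood
    return corrected / corrected.sum()

def predict_next_belief(history):
    """Extrapolate the belief trajectory by its last first-difference,
    one simple realisation of 'trend analysis and prediction'."""
    h = np.array(history)
    trend = h[-1] - h[-2]
    nxt = np.clip(h[-1] + trend, 0.0, 1.0)
    return nxt / nxt.sum()

history = [np.array([0.5, 0.5])]
for obs in (0, 0):                     # two observations favouring state 0
    history.append(belief_update(history[-1], obs))
forecast = predict_next_belief(history)
```

Because the forecast looks at the belief history rather than only the current belief, a controller can switch contextual control modes before the belief itself has fully committed to one intent.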

    Predicting user mental states in spoken dialogue systems

    In this paper we propose a method for predicting the user's mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user's needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which evaluation results with both simulated and real users show that taking the user's mental state into account improves system performance as well as its perceived quality.
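One way to picture the per-turn module described above, sitting between natural language understanding and dialogue management, is the toy sketch below. The keyword cues and labels are hypothetical stand-ins for the paper's trained recognizers, used only to show the (emotion, intention) pairing that forms the mental state.

```python
# Illustrative cue tables; a real system would use trained classifiers.
EMOTION_CUES = {"angry": "negative", "thanks": "positive"}
INTENT_CUES = {"book": "make_booking", "cancel": "cancel_booking"}

def predict_mental_state(user_turn):
    """Return (emotion, intention) for one user turn, to be passed from
    natural language understanding on to dialogue management."""
    words = [w.strip(",.!?") for w in user_turn.lower().split()]
    emotion = next((EMOTION_CUES[w] for w in words if w in EMOTION_CUES),
                   "neutral")
    intention = next((INTENT_CUES[w] for w in words if w in INTENT_CUES),
                     "unknown")
    return emotion, intention

state = predict_mental_state("I am angry, cancel my reservation")
```

A dialogue manager receiving ("negative", "cancel_booking") can then adapt its strategy, for instance by using a more apologetic confirmation prompt.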

    Emotion-Aware and Human-Like Autonomous Agents

    In human-computer interaction (HCI), one of the technological goals is to build human-like artificial agents that can think, decide and behave like humans during the interaction. A prime example is a dialogue system, where the agent should converse fluently and coherently with a user and connect with them emotionally. Humanness and emotion-awareness of interactive artificial agents have been shown to improve user experience and help attain application-specific goals more quickly. However, achieving human-likeness in HCI systems is contingent on addressing several philosophical and scientific challenges. In this thesis, I address two such challenges: replicating the human ability to 1) correctly perceive and adopt emotions, and 2) communicate effectively through language. Several research studies in neuroscience, economics, psychology and sociology show that both language and emotional reasoning are essential to the human cognitive deliberation process. These studies establish that any human-like AI should necessarily be equipped with adequate emotional and linguistic cognizance. To this end, I explore the following research directions.
    - I study how agents can reason emotionally in various human-interactive settings for decision-making. I use Bayesian Affect Control Theory, a probabilistic model of human-human affective interactions, to build a decision-theoretic reasoning algorithm about affect. This approach is validated on several applications: two-person social dilemma games, an assistive healthcare device, and robot navigation.
    - I develop several techniques to understand and generate emotions/affect in language. The proposed methods include affect-based feature augmentation of neural conversational models, training regularization using affective objectives, and affectively diverse sequential inference.
    - I devise an active learning technique that elicits user feedback during a conversation. This enables the agent to learn in real time, and to produce natural and coherent language during the interaction.
    - I explore incremental domain adaptation in language classification and generation models. The proposed method seeks to replicate the human ability to continually learn from new environments without forgetting old experiences.
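The feedback-elicitation direction mentioned above can be sketched as uncertainty-triggered querying: ask the user for feedback only when the agent's response distribution is high-entropy. The entropy threshold and the example distributions are illustrative assumptions, not the thesis's actual criterion.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + 1e-12)))

def should_ask_feedback(response_probs, threshold=0.6):
    """True when the agent is unsure which reply is appropriate, so a
    clarification or feedback request is worth the interruption."""
    return entropy(response_probs) > threshold
```

A peaked distribution such as [0.9, 0.05, 0.05] stays below the threshold and the agent answers directly, while a flat one such as [0.4, 0.3, 0.3] triggers a feedback request, letting the system learn from the user's reply in real time.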