    Multilingual Chatbots to Collect Patient-Reported Outcomes

    With the rise of spoken language interfaces and chatbots, conversational intelligence has become an emerging field of research in human-machine interfaces across several target domains. In this paper, we introduce a multilingual conversational chatbot platform that integrates the Open Health Connect platform and an mHealth application with multimodal services to deliver advanced 3D embodied conversational agents. The platform enables novel human-machine interaction with cancer survivors in six different languages, and it feeds patient-reported information into digital clinical records as patients gather health data. More broadly, conversational agents have the potential to play a significant role in healthcare: as assistants during clinical consultations, in supporting positive behavior change, or as assistants in living environments helping with daily tasks and activities.
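
    The abstract does not detail the record format, so the following is a minimal sketch of how a chatbot answer might be captured as a patient-reported outcome and appended to a digital record; every name in it (PatientReportedOutcome, submit_outcome, the field names) is a hypothetical illustration, not the platform's API.

```python
# Hypothetical sketch: all names below are illustrations, not the platform's API.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PatientReportedOutcome:
    """One answer collected by the chatbot, tagged with the dialogue language."""
    patient_id: str
    question_id: str   # e.g. an item from a standard PRO questionnaire
    answer: str
    language: str      # ISO 639-1 code of the conversation, e.g. "es"
    reported_at: str   # UTC timestamp in ISO 8601

def submit_outcome(record_store: list, pro: PatientReportedOutcome) -> None:
    """Append the outcome to the (here purely in-memory) clinical record."""
    record_store.append(asdict(pro))

records: list = []
submit_outcome(records, PatientReportedOutcome(
    patient_id="p-001",
    question_id="fatigue-01",
    answer="moderate",
    language="es",
    reported_at=datetime.now(timezone.utc).isoformat(),
))
print(records[0])
```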

    Lip syncing method for realistic expressive three-dimensional face model

    Lip synchronization of 3D face models is now used in a multitude of important fields. It brings a more human and dramatic reality to computer games, films, and interactive multimedia, and is growing in use and importance. A high level of realism is needed in demanding applications such as computer games and cinema, yet authoring lip syncing with complex and subtle expressions remains difficult and fraught with problems of realism. This study therefore proposes a lip syncing method for a realistic expressive 3D face model. Animated lips require a 3D face model capable of representing the movement of face muscles during speech, together with a method to produce the correct lip shape at the correct time. The 3D face model is designed according to the MPEG-4 facial animation standard to support lip syncing aligned with an input audio file, and it deforms using a Raised Cosine Deformation function grafted onto the input facial geometry. The study also proposes a method to animate the 3D face model over time, creating animated lip syncing from a canonical set of visemes for all pairwise combinations of a reduced phoneme set called ProPhone. Finally, the study integrates emotions, drawing on both the Ekman model and Plutchik's wheel, with emotive eye movements implemented via the Emotional Eye Movements Markup Language, to produce a realistic 3D face model. The experimental results show that the proposed model can generate visually satisfactory animations, with Mean Square Error of 0.0020 for the neutral expression, 0.0024 for happy, 0.0020 for angry, 0.0030 for fear, 0.0026 for surprise, 0.0010 for disgust, and 0.0030 for sad.
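
    The abstract names a Raised Cosine Deformation function but does not reproduce it; a common raised-cosine kernel for localized mesh deformation looks like the sketch below, where the radius, control point, and displacement are illustrative assumptions rather than the paper's parameters.

```python
# Illustrative raised-cosine deformation kernel; the paper's exact RCD
# formulation and parameters are not given in the abstract.
import numpy as np

def raised_cosine_weight(dist: np.ndarray, radius: float) -> np.ndarray:
    """Raised-cosine falloff: 1 at the control point, 0 at and beyond `radius`."""
    return 0.5 * (1.0 + np.cos(np.pi * np.clip(dist / radius, 0.0, 1.0)))

def deform(vertices: np.ndarray, control: np.ndarray,
           displacement: np.ndarray, radius: float) -> np.ndarray:
    """Move vertices near `control` by `displacement`, weighted by the kernel."""
    dist = np.linalg.norm(vertices - control, axis=1)
    return vertices + raised_cosine_weight(dist, radius)[:, None] * displacement

# Toy example: pull vertices near a lower-lip control point downward,
# as a viseme transition might during speech.
verts = np.random.rand(100, 3)
lower_lip = np.array([0.5, 0.2, 0.5])
open_mouth = deform(verts, lower_lip, np.array([0.0, -0.05, 0.0]), radius=0.3)
```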

    Dramatic Expression in Opera, and Its Implications for Conversational Agents

    This article has discussed principles, techniques, and methods of dramatic portrayal in opera, and their application to the development of embodied conversational agents. Investigations such as this complement studies of natural human behavior and offer insights into how to make such behavior understandable and interesting when adapted for use by embodied conversational agents. However, one should use caution in applying such lessons: the unique characteristics of computer-based media are still being identified and explored, and one must always be careful about applying principles blindly to any artistic form. Such principles are post-hoc analyses of the intuitive skill of great artists; this was as true in Aristotle's day as it is today. We should not let structural principles stand in the way of injecting creativity into the design of ECAs. Opera at its best possesses an element of magic that is difficult to describe, much less analytically reconstruct. We can only hope to achieve a similar result with conversational agents.

    The SEMAINE API: a component integration framework for a naturally interacting and emotionally competent embodied conversational agent

    The present thesis addresses the topic area of Embodied Conversational Agents (ECAs) with capabilities for natural interaction with a human user and emotional competence with respect to the perception and generation of emotional expressivity. The focus is on the technological underpinnings that facilitate the implementation of a real-time system with these capabilities, built from re-usable components. The thesis comprises three main contributions. First, it describes a new component integration framework, the SEMAINE API, which makes it easy to build emotion-oriented systems from components that interact with one another using standard and pre-standard XML representations. Second, it presents a prepare-and-trigger system architecture which substantially reduces the time to animation for system utterances that can be pre-planned. Third, it reports on the W3C Emotion Markup Language, an upcoming web standard for representing emotions in technological systems. We assess critical aspects of system performance, showing that the framework provides a good basis for implementing real-time interactive ECA systems, and illustrate by means of three examples that the SEMAINE API makes it easy to build new emotion-oriented systems from new and existing components.
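
    A minimal sketch of the prepare-and-trigger idea as described here, where costly behavior synthesis runs ahead of time and is only played back when triggered; all class and method names are illustrative assumptions, not the SEMAINE API's actual interfaces.

```python
# Illustrative sketch of a prepare-and-trigger architecture; not the SEMAINE API.
import threading

class PrepareAndTrigger:
    """Prepare expensive utterance animations ahead of time; trigger them instantly."""

    def __init__(self, synthesize):
        self._synthesize = synthesize   # expensive step: audio + animation synthesis
        self._prepared = {}
        self._lock = threading.Lock()

    def prepare(self, utterance_id: str, text: str) -> None:
        """Run synthesis in the background while the dialogue continues."""
        def work():
            result = self._synthesize(text)
            with self._lock:
                self._prepared[utterance_id] = result
        threading.Thread(target=work, daemon=True).start()

    def trigger(self, utterance_id: str, text: str):
        """Play a prepared utterance if ready; otherwise synthesize on the spot."""
        with self._lock:
            result = self._prepared.pop(utterance_id, None)
        return result if result is not None else self._synthesize(text)

pat = PrepareAndTrigger(synthesize=lambda text: f"<animation for {text!r}>")
pat.prepare("greet", "Hello there!")         # planned while the user still speaks
print(pat.trigger("greet", "Hello there!"))  # near-instant if preparation finished
```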

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, attend to its interaction partner while it is speaking, and modify its communicative behavior on the fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that are released for public access.
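
    As a minimal sketch of what interrupting ongoing behavior based on classified listener responses could look like (the classifier interface and response categories below are assumptions for illustration, not the project's actual components):

```python
from enum import Enum, auto

class ListenerResponse(Enum):
    NONE = auto()
    BACKCHANNEL = auto()   # "mm-hm": keep talking
    INTERRUPTION = auto()  # partner takes the turn: stop speaking

def speak_continuously(chunks, classify):
    """Deliver speech chunk by chunk, adapting to perceived listener responses.

    `classify` stands in for a listener-response classifier; its name and
    interface are assumptions made for this sketch.
    """
    for chunk in chunks:
        response = classify()
        if response is ListenerResponse.INTERRUPTION:
            print("(stops mid-utterance and yields the turn)")
            return
        print(chunk, end=" ")
    print()

responses = iter([ListenerResponse.NONE, ListenerResponse.BACKCHANNEL,
                  ListenerResponse.INTERRUPTION])
speak_continuously(["This", "is", "a", "long", "explanation", "about..."],
                   classify=lambda: next(responses))
```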

    Towards an architectural framework for intelligent virtual agents using probabilistic programming

    We present a new framework called KorraAI for conceiving and building embodied conversational agents (ECAs). Our framework models ECA behavior taking into account contextual information, for example about the environment and interaction time, as well as uncertain information provided by the human interaction partner. Moreover, agents built with KorraAI can show proactive behavior, as they can initiate interactions with human partners. For these purposes, KorraAI exploits probabilistic programming: probabilistic models describe the agent's behavior and its interactions with the user, enabling adaptation to the user's preferences and a certain degree of indeterminism that makes the ECA's behavior more natural. Human-like internal states, such as moods, preferences, and emotions (e.g., surprise), are modeled in KorraAI with distributions and Bayesian networks, and these models can evolve over time, even without interaction with the user. ECA models are implemented as plugins and share a common interface, which lets ECA designers focus more on the character they are modeling and less on the technical details, and makes it possible to store and exchange ECA models. Several applications of KorraAI ECAs are possible, such as virtual sales agents, customer service agents, virtual companions, entertainers, or tutors.
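
    A toy sketch of the kind of probabilistic internal state described here: a mood value that drifts over time even without interaction, and a Beta-distributed preference updated from user feedback. All names and distributions are assumptions for illustration, not KorraAI's actual models.

```python
import random

class MoodModel:
    """Toy stand-in for a probabilistic internal agent state (names assumed).

    Valence drifts as a bounded Gaussian random walk even without interaction,
    and a Beta distribution tracks the user's inferred taste for small talk.
    """

    def __init__(self):
        self.valence = 0.0                 # current mood in [-1, 1]
        self.likes, self.dislikes = 1, 1   # Beta(likes, dislikes) prior

    def tick(self):
        """Mood evolves over time, independently of user input."""
        self.valence = max(-1.0, min(1.0, self.valence + random.gauss(0, 0.1)))

    def observe_smalltalk(self, user_engaged: bool):
        """Bayesian-style update of the small-talk preference."""
        if user_engaged:
            self.likes += 1
        else:
            self.dislikes += 1

    def should_initiate_smalltalk(self) -> bool:
        """Sampling the decision yields the indeterminism mentioned above."""
        p = self.likes / (self.likes + self.dislikes)  # posterior mean
        return random.random() < p and self.valence > -0.5

m = MoodModel()
for _ in range(10):
    m.tick()                     # state evolves with no user present
m.observe_smalltalk(user_engaged=True)
print(m.valence, m.should_initiate_smalltalk())
```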

    A semantic memory bank assisted by an embodied conversational agent for mobile devices

    Alzheimer’s disease is a type of dementia that causes memory loss and seriously interferes with intellectual abilities. It currently has no cure, and the therapeutic efficacy of available medication is limited. However, there is evidence that non-pharmacological treatments can help stimulate cognitive abilities. In the last few years, several studies have focused on describing and understanding how Virtual Coaches (VC) could be key drivers for health promotion in home care settings, and VCs are receiving growing attention as a medical innovation. In this paper, we propose an approach that exploits semantic technologies and Embodied Conversational Agents to help patients train cognitive abilities using mobile devices. Semantic technologies provide knowledge about the memory of a specific person: the system exploits structured data stored in a linked data repository and takes advantage of the flexibility provided by ontologies to define search domains and expand the agent’s capabilities. Our Memory Bank Embodied Conversational Agent (MBECA) interacts with the patient and eases the interaction with new devices. The framework is oriented to Alzheimer’s patients, caregivers, and therapists.
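
    A minimal sketch of the linked-data side of such a memory bank, using the rdflib library; the mb: ontology, the triples, and the query are invented for illustration, since the paper's actual vocabulary and repository are not specified here.

```python
# Hypothetical sketch: the mb: ontology and all triples are invented for
# illustration; the paper's actual vocabulary and repository are not given.
from rdflib import Graph, Literal, Namespace, RDF

MB = Namespace("http://example.org/memorybank#")
g = Graph()
g.add((MB.event1, RDF.type, MB.Memory))
g.add((MB.event1, MB.involves, Literal("granddaughter Ana")))
g.add((MB.event1, MB.happenedAt, Literal("the beach, summer 1998")))

# The agent could answer "who was with me at the beach?" via a SPARQL query.
q = """
PREFIX mb: <http://example.org/memorybank#>
SELECT ?who ?where WHERE {
    ?m a mb:Memory ; mb:involves ?who ; mb:happenedAt ?where .
}
"""
for who, where in g.query(q):
    print(f"You were with {who} at {where}.")
```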