On combining the facial movements of a talking head
We present work on Obie, an embodied conversational
agent framework. An embodied conversational agent, or
talking head, consists of three main components. The
graphical part consists of a face model and a facial muscle
model. Besides the graphical part, we have implemented
an emotion model and a mapping from emotions to facial
expressions. The animation part of the framework focuses
on the combination of different facial movements
temporally. In this paper we propose a scheme of
combining facial movements on a 3D talking head
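As a rough illustration of what temporally combining facial movements can look like (this is a hypothetical sketch, not the paper's actual scheme; the muscle-channel names and trapezoidal envelopes are invented for the example), each movement can drive a set of muscle channels with an intensity envelope over time, and overlapping movements can be resolved per channel:

```python
# Hypothetical sketch: each facial movement drives muscle channels
# with a trapezoidal intensity envelope; overlapping movements are
# combined by taking the per-channel maximum at each point in time.

def envelope(t, start, attack, sustain, release, peak=1.0):
    """Trapezoidal intensity envelope for one movement (0..peak)."""
    dt = t - start
    if dt < 0:
        return 0.0
    if dt < attack:                      # ramp up
        return peak * dt / attack
    if dt < attack + sustain:            # hold
        return peak
    if dt < attack + sustain + release:  # ramp down
        return peak * (1 - (dt - attack - sustain) / release)
    return 0.0

def combine(movements, t):
    """Blend overlapping movements: per muscle channel, keep the
    maximum intensity any active movement requests at time t."""
    channels = {}
    for m in movements:
        w = envelope(t, m["start"], m["attack"], m["sustain"], m["release"])
        for ch in m["channels"]:
            channels[ch] = max(channels.get(ch, 0.0), w)
    return channels

# Invented example movements: a smile overlapping with a blink.
smile = {"start": 0.0, "attack": 0.2, "sustain": 1.0, "release": 0.3,
         "channels": ["zygomatic_major_L", "zygomatic_major_R"]}
blink = {"start": 0.5, "attack": 0.05, "sustain": 0.05, "release": 0.05,
         "channels": ["orbicularis_oculi"]}

state = combine([smile, blink], t=0.6)
```

A per-channel maximum is only one possible blending rule; weighted sums or priority schemes are equally plausible resolutions of conflicting movements.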
CoachAI: A Conversational Agent Assisted Health Coaching Platform
Poor lifestyle is a health risk factor and the leading cause of
morbidity and chronic conditions. Its impact can be significantly
altered by individual behavior change. Although healthcare is shifting
towards promoting long-lasting behavior change, increasing caregiver
workload and individuals' continuous need for care make it necessary
to ease the caregiver's work while ensuring continuous interaction with
users. This paper describes the design and validation of CoachAI, a
conversational agent assisted health coaching system to support health
intervention delivery to individuals and groups. CoachAI instantiates a text
based healthcare chatbot system that bridges the remote human coach and the
users. This research provides three main contributions to preventive
healthcare and healthy lifestyle promotion: (1) it presents a conversational
agent to aid the caregiver; (2) it aims to decrease the caregiver's workload
and enhance the care given to users by automating repetitive caregiver
tasks; and (3) it presents a domain-independent mobile health conversational
agent for health intervention delivery. We discuss our approach and
analyze the results of a one-month validation study on physical activity,
healthy diet, and stress management.
The virtual guide: a direction giving embodied conversational agent
We present the Virtual Guide, an embodied conversational agent that can give directions in a 3D virtual environment. We discuss how dialogue management, language generation, and the generation of appropriate gestures are carried out in our system.
A Knowledge-Grounded Multimodal Search-Based Conversational Agent
Multimodal search-based dialogue is a challenging new task: It extends
visually grounded question answering systems into multi-turn conversations with
access to an external database. We address this new challenge by learning a
neural response generation system from the recently released Multimodal
Dialogue (MMD) dataset (Saha et al., 2017). We introduce a knowledge-grounded
multimodal conversational model where an encoded knowledge base (KB)
representation is appended to the decoder input. Our model substantially
outperforms strong baselines in terms of text-based similarity measures (over 9
BLEU points, 3 of which are solely due to the use of additional information
from the KB).
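The core mechanism described, appending an encoded knowledge-base representation to the decoder input, can be sketched roughly as follows (a minimal NumPy illustration under assumed dimensions, not the paper's actual model; the toy KB encoder and dimensions are invented):

```python
import numpy as np

# Illustrative sketch: a knowledge base is encoded into a fixed
# vector, which is concatenated to every decoder timestep's token
# embedding so that generation is conditioned on the KB.

rng = np.random.default_rng(0)
vocab, d_tok, d_kb = 100, 32, 16

embed = rng.normal(size=(vocab, d_tok))        # token embedding table

def row_vec(row):
    # Deterministic pseudo-embedding for one KB row (illustration only).
    h = abs(hash(row)) % (2 ** 32)
    return np.random.default_rng(h).normal(size=d_kb)

def encode_kb(kb_rows):
    """Toy KB encoder: mean of per-row pseudo-embeddings."""
    return np.mean([row_vec(r) for r in kb_rows], axis=0)

def decoder_inputs(token_ids, kb_vec):
    """Append the KB encoding to each timestep's token embedding."""
    toks = embed[token_ids]                    # (T, d_tok)
    kb = np.tile(kb_vec, (len(token_ids), 1))  # (T, d_kb)
    return np.concatenate([toks, kb], axis=1)  # (T, d_tok + d_kb)

kb_vec = encode_kb([("shirt", "color", "blue"), ("shirt", "price", "29")])
x = decoder_inputs([1, 5, 7], kb_vec)
print(x.shape)  # (3, 48)
```

In a real model the KB encoder and decoder would be learned jointly; the point here is only the concatenation of a KB vector onto each decoder input.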
Experimenting with the Gaze of a Conversational Agent
We have carried out a pilot experiment to investigate the effects of different eye gaze behaviors of a cartoon-like talking face on the quality of human-agent dialogues. We compared a version of the talking face that roughly implements some patterns of humanlike gaze behavior, which we call the optimal version, with two other versions: in one, the shifts in gaze were kept minimal, and in the other, the shifts occurred randomly. The talking face has a number of restrictions: there is no speech recognition, so questions and replies have to be typed in by the users of the system. Despite this restriction, we found that participants who conversed with the optimal agent appreciated it more than participants who conversed with the other agents. Conversations with the optimal version also proceeded more efficiently: participants needed less time to complete their task.
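The three gaze conditions compared in such an experiment can be sketched as simple policies (a hypothetical illustration; the event names and the specific human-like heuristic are assumptions, not the study's implementation):

```python
import random

# Hypothetical sketch of three gaze-shift policies: each decides,
# per dialogue event, whether the agent looks AT the user or AWAY.

def minimal_policy(event, state):
    return "at_user"                        # gaze (almost) never shifts

def random_policy(event, state, rng=random.Random(42)):
    return rng.choice(["at_user", "away"])  # shifts occur at random

def humanlike_policy(event, state):
    # Rough human-like pattern: look away when starting to speak
    # (utterance planning), look back when listening or yielding the turn.
    if event == "start_speaking":
        return "away"
    if event in ("end_speaking", "listening"):
        return "at_user"
    return state                            # otherwise keep current gaze

events = ["listening", "start_speaking", "end_speaking"]
gaze = "at_user"
trace = []
for e in events:
    gaze = humanlike_policy(e, gaze)
    trace.append(gaze)
print(trace)  # ['at_user', 'away', 'at_user']
```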
Improving Search through A3C Reinforcement Learning based Conversational Agent
We develop a reinforcement-learning-based search assistant which can assist
users, through a set of actions and a sequence of interactions, to
realize their intent. Our approach caters to subjective search, where the user
is seeking digital assets such as images, which is fundamentally different from
tasks that have objective and limited search modalities. Labeled
conversational data is generally not available for such search tasks, and
training the agent through human interactions can be time consuming. We propose
a stochastic virtual user which impersonates a real user and can be used to
sample user behavior efficiently to train the agent, which accelerates the
bootstrapping of the agent. We develop an A3C-based context-preserving
architecture which enables the agent to provide contextual assistance to the
user. We compare the A3C agent with Q-learning and evaluate its performance on
the average rewards and state values it obtains with the virtual user in
validation episodes. Our experiments show that the agent learns to achieve
higher rewards and better states.
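The stochastic-virtual-user idea can be sketched minimally as follows (a hypothetical illustration: the action names, probabilities, and episode loop are invented, not taken from the paper):

```python
import random

# Illustrative sketch of a stochastic virtual user: user behavior is
# modeled as a distribution over responses conditioned on the agent's
# last action, so training episodes can be sampled without a human
# in the loop. All action names and probabilities are invented.

USER_MODEL = {
    "show_results":   [("refine_query", 0.5), ("click_asset", 0.3), ("quit", 0.2)],
    "ask_clarify":    [("give_detail", 0.8), ("quit", 0.2)],
    "suggest_filter": [("accept_filter", 0.6), ("refine_query", 0.4)],
}

def virtual_user(agent_action, rng):
    """Sample the simulated user's response to an agent action."""
    acts, probs = zip(*USER_MODEL[agent_action])
    return rng.choices(acts, weights=probs, k=1)[0]

def sample_episode(policy, rng, max_turns=10):
    """Roll out one agent/virtual-user episode for RL training."""
    history = []
    for _ in range(max_turns):
        a = policy(history)          # agent picks an action
        u = virtual_user(a, rng)     # virtual user responds
        history.append((a, u))
        if u == "quit":
            break
    return history

rng = random.Random(0)
ep = sample_episode(lambda h: "show_results", rng)  # trivial fixed policy
```

An RL agent would replace the fixed policy and learn from rewards attached to these sampled transitions; the virtual user only needs to be cheap to query and statistically plausible.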
Agent Simulation to Develop Interactive and User-Centered Conversational Agents
Proceedings of: International Symposium on Distributed Computing and Artificial Intelligence (DCAI 2011), Salamanca, 06-08 April 2011.

In this paper, we present a technique for developing user simulators which are able to interact with and evaluate conversational agents. Our technique is based on a statistical model that is automatically learned from a dialog corpus. This model is used by the user simulator to provide the next answer, taking into account the complete history of the interaction. The main objective of our proposal is not only to evaluate the conversational agent, but also to improve it by employing the simulated dialogs to learn a better dialog model. We have applied this technique to design and evaluate a conversational agent which provides academic information in a multi-agent system. The results of the evaluation show that the conversational agent reduces the time needed to complete the dialogs, thereby allowing it to tackle new situations and generate coherent answers for situations already present in the initial model.

Funded by projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485), and DPS2008-07029-C02-02.
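A statistical user model learned from a dialog corpus can be sketched as a simple conditional count model over dialogue acts (a hypothetical illustration: the toy corpus, act names, and history length are invented, not the paper's model):

```python
import random
from collections import Counter, defaultdict

# Rough sketch of a corpus-learned user simulator: estimate
# P(next act | recent dialogue history) by counting contexts in a
# dialog corpus, then sample responses during simulation.
# The toy corpus of dialogue acts below is invented.

corpus = [
    ["greet", "ask_course", "inform_course", "ask_schedule", "inform_schedule", "bye"],
    ["greet", "ask_course", "inform_course", "ask_room", "inform_room", "bye"],
    ["greet", "ask_schedule", "inform_schedule", "bye"],
]

def learn_model(dialogs, history_len=1):
    """Count which acts follow each length-history_len context."""
    model = defaultdict(Counter)
    for d in dialogs:
        for i in range(1, len(d)):
            ctx = tuple(d[max(0, i - history_len):i])
            model[ctx][d[i]] += 1
    return model

def simulate_turn(model, history, rng, history_len=1):
    """Sample the simulated user's next act given the recent history."""
    ctx = tuple(history[-history_len:])
    acts, weights = zip(*model[ctx].items())
    return rng.choices(acts, weights=weights, k=1)[0]

model = learn_model(corpus)
rng = random.Random(1)
nxt = simulate_turn(model, ["greet"], rng)  # one of the acts seen after "greet"
```

Using the complete interaction history, as the paper describes, would correspond to a larger `history_len` (with smoothing for unseen contexts); the counting-and-sampling structure stays the same.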