Towards responsive Sensitive Artificial Listeners
This paper describes work in the recently started project SEMAINE, which aims to build a set of Sensitive Artificial Listeners – conversational agents designed to sustain an interaction with a human user despite limited verbal skills, through robust real-time recognition and generation of non-verbal behaviour, both when the agent is speaking and when it is listening. We report on data collection and on the design of a system architecture aimed at real-time responsiveness.
Agents for educational games and simulations
This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.
A good gesture: exploring nonverbal communication for robust SLDSs
Actas de las IV Jornadas de Tecnología del Habla (JTH 2006)
In this paper we propose a research framework to explore the possibilities that state-of-the-art embodied conversational agent (ECA) technology can offer to overcome typical robustness problems in spoken language dialogue systems (SLDSs), such as error detection and recovery, changes of turn, and clarification requests, which occur in many human-machine dialogue situations in real applications. Our goal is to study the effects of nonverbal communication throughout the dialogue and to find out to what extent ECAs can help overcome user frustration in critical situations. In particular, we have created a gestural repertoire that we will test and continue to refine and expand, to fit the users' expectations and intuitions as closely as possible and to favour a more efficient and pleasant dialogue flow for the users. We also describe the test environment we have designed, simulating a realistic mobile application, as well as the evaluation methodology for assessing, in forthcoming tests, the potential benefits of adding nonverbal communication in complex dialogue situations. This work has been possible thanks to the support grant received from project TIC2003-09068-C02-02 of the Spanish Plan Nacional de I+D.
Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects
These are the Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects. Objects that we use in our everyday lives are expanding beyond their restricted interaction capabilities and now provide functionality far beyond their original purpose. They feature computing capabilities and are thus able to capture, process, and store information and to interact with their environment, turning them into smart objects.
Effective Tutoring with Empathic Embodied Conversational Agents
This thesis examines the prospect of using empathy in an Embodied Tutoring System (ETS) that guides students through an online quiz (by providing feedback on student answers and responding to self-reported student emotion). The ETS seeks to imitate human behaviours successfully used in one-to-one human tutorial interactions. The main hypothesis is that the interaction with an empathic ETS results in greater learning gains than a neutral ETS, primarily by encouraging positive and reducing negative student emotions using empathic feedback.
In a preparatory study we investigated different strategies for expressing emotion by the ETS. We established that a multimodal strategy achieves the best results regarding how accurately human participants can recognise the emotions. This approach was used in developing the feedback strategy for our empathic ETS.
The preparatory study was followed by two studies in which we compared a neutral with an empathic ETS; the ETS in the second study was developed using results from the first. In both studies, we found no statistically significant difference in learning gains between the neutral and empathic ETS. However, we did discover a number of interactions between the ETS system, learning gains, and, in particular, (1) student scores on an empathic-tendency test and (2) student ability. We also analysed the subjective responses and the relation between self-reported emotions during the quiz and student learning gains.
Based on our studies in a formal classroom setting, we assess the prospects of using empathic agents in the classroom and describe a number of requirements for their effective use.
Designing multimodal interaction for the visually impaired
Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism to facilitate visually impaired users' information access.
This research investigates sighted and visually impaired users' multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working memory load, or prevalence of errors in a given modality impacts a user's choice. Theories in human memory and attention are used to explain the users' speech and touch input coordination.
Among the abundant findings from this research, the following are the most important in guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken. Users prefer touch input for navigation operations, but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality, instead of switching to another modality for error correction. (4) Despite the common multimodal usage patterns, there is still a high degree of individual differences in modality choices.
Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage: teaching a modality first rather than second increases its use in users' task performance.
In addition to discovering the multimodal interaction patterns above, this research contributes to the field of human-computer interaction design by: (1) presenting a design of an eyes-free multimodal information browser, and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction.
Overall, this work is one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can be used effectively for eyes-free tasks.
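Finding (2) of this study (users prefer touch input for navigation operations and speech input for non-navigation operations) could inform a default modality suggestion in such a system. A minimal sketch, not taken from the thesis and with hypothetical operation names:

```python
# Illustrative sketch only: choosing a default input modality from the
# study's observed pattern that users prefer touch for navigation
# operations and speech for everything else. Operation names are invented.

NAVIGATION_OPS = {"next", "previous", "scroll", "back", "jump_to"}

def preferred_modality(operation: str) -> str:
    """Default modality suggested by the observed usage pattern."""
    return "touch" if operation in NAVIGATION_OPS else "speech"

print(preferred_modality("next"))    # touch
print(preferred_modality("search"))  # speech
```

A real system would of course still accept either modality for any operation, since the study also found a high degree of individual difference in modality choices.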
Continuous Interaction with a Virtual Human
Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, and be able to attend to its interaction partner while it is speaking – and modify its communicative behavior on-the-fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE'10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that are released for public access.
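The idea of scheduling and interrupting multimodal behavior can be sketched as a small scheduler in which a detected listener response may interrupt the agent's ongoing behavior so that a reaction can be planned. This is an illustrative sketch only, not the eNTERFACE'10 project code, and all names are hypothetical:

```python
# Illustrative sketch: a minimal behavior scheduler where an incoming
# listener response can interrupt the agent's current multimodal behavior.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Behavior:
    name: str
    interruptible: bool = True  # e.g. a short greeting might be protected

@dataclass
class BehaviorScheduler:
    queue: List[Behavior] = field(default_factory=list)
    current: Optional[Behavior] = None

    def schedule(self, behavior: Behavior) -> None:
        self.queue.append(behavior)

    def start_next(self) -> Optional[Behavior]:
        # Begin realizing the next queued behavior, if none is running.
        if self.current is None and self.queue:
            self.current = self.queue.pop(0)
        return self.current

    def on_listener_response(self) -> Optional[Behavior]:
        # Interrupt the ongoing behavior (when allowed) so the agent can
        # react on-the-fly to what it perceives from its partner.
        if self.current is not None and self.current.interruptible:
            interrupted, self.current = self.current, None
            return interrupted
        return None
```

In a full system the interrupted behavior would be re-planned or resumed after the reaction, rather than simply dropped as here.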