
    ECA gesture strategies for robust SLDSs

    This paper explores the use of embodied conversational agents (ECAs) to improve interaction with spoken language dialogue systems (SLDSs). For this purpose we identified typical interaction problems with SLDSs and associated each of them with a particular ECA gesture or behaviour. User tests were carried out with the test users divided into two groups, each facing a different interaction metaphor (one with an ECA in the interface, the other implemented with voice only). Our results suggest that user frustration is lower when an ECA is present in the interface and that the dialogue flows more smoothly, partly because users are better able to tell when they are expected to speak and whether the system has heard and understood them. The users’ overall perceptions of the system were also affected, and interaction seems to be more enjoyable with an ECA than without it.

    Evaluation of ECA Gesture strategies for robust Human-Computer Interaction

    Embodied Conversational Agents (ECAs) offer the possibility of designing pleasant and efficient human-machine interaction. In this paper we present an evaluation scheme to compare dialogue-based speaker authentication and information retrieval systems with and without ECAs in the interface. We used gestures and other visual cues to improve the fluency and robustness of interaction with these systems. Our test results suggest that when an ECA is present, users perceive fewer system errors, their frustration levels are lower, turn-changing goes more smoothly, the interaction experience is more enjoyable, and system capabilities are generally perceived more positively than when no ECA is present. However, the ECA seems to intensify the users' privacy concerns.

    AGORA, multilingual multiplatform architecture for the development of natural language voice services

    The natural language spoken dialogue system AGORA has been developed using a collaborative dialogue model with mixed initiative, together with computational linguistic models and experience. Thanks to these technologies, the system is highly flexible and does not need keywords or directed menus. This demo shows the system's multilingual ability and its proactivity. It also presents a multiservice system and a vocal platform incorporating the latest advances in the collection of expert subdialogue data.