
    Vers l'évaluation de systèmes de dialogue homme-machine : de l'oral au multimodal

    Evaluating human-machine dialogue systems is less efficient, objective, and consensual than evaluating other natural language processing systems. Oral and multimodal dialogue systems still operate within restricted application domains, which makes comparative and normative evaluations difficult. Moreover, continuous technological progress quickly renders evaluation paradigms obsolete and causes them to multiply. Solutions have yet to be found to improve existing methods and to allow a more automated diagnosis of systems. The aim of this paper is to provide a set of remarks on the evaluation of multimodal spoken language dialogue systems. Extensions of existing paradigms are presented, in particular DQR/DCR, given that some paradigms fit multimodal issues better than others. Conclusions and perspectives are then drawn on the future of the evaluation of human-machine dialogue systems.

    SD-TEAM: Interactive Learning, Self-Evaluation and Multimodal Technologies for Multidomain Spoken Dialog Systems

    Speech technology currently supports the development of dialogue systems that function only in the limited domains for which they were trained and in the conditions for which they were designed, that is, specific acoustic conditions, speakers, etc. The international scientific community has made significant efforts in exploring methods for adaptation to different acoustic contexts, tasks and types of user. However, further work is needed to produce multimodal spoken dialogue systems capable of exploiting interactivity to learn online and improve their performance. The goal is to produce flexible and dynamic multimodal, interactive systems based on spoken communication, capable of automatically detecting their operating conditions and, especially, of learning from user interactions and experience by evaluating their own performance. Such "living" systems will evolve continuously and without supervision until user satisfaction is achieved. Special attention will be paid to those groups of users for whom adaptation and personalisation are essential: among others, people with disabilities that lead to communication difficulties (hearing loss, dysfluent speech, ...), people with mobility problems, and non-native users. In this context, the SD-TEAM Project aims to advance the development of technologies for interactive learning and evaluation. In addition, it will develop flexible distributed architectures that allow synergistic interaction between processing modules from a variety of dialogue systems designed for distinct tasks, user groups, acoustic conditions, etc. These technologies will be demonstrated via multimodal dialogue systems for accessing services from home and for accessing unstructured information, based on the multi-domain systems developed in the previous project TIN2005-08660-C04

    Multimodal agent interfaces and system architectures for health and fitness companions

    Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. In this paper we present how such multimodal conversational Companions can be implemented to support their owners in various pervasive and mobile settings. In particular, we focus on different forms of multimodality and system architectures for such interfaces

    Improving Context Modelling in Multimodal Dialogue Generation

    In this work, we investigate the task of textual response generation in a multimodal task-oriented dialogue system. Our work is based on the recently released Multimodal Dialogue (MMD) dataset (Saha et al., 2017) in the fashion domain. We introduce a multimodal extension to the Hierarchical Recurrent Encoder-Decoder (HRED) model and show that this extension outperforms strong baselines in terms of text-based similarity metrics. We also showcase the shortcomings of current vision and language models by performing an error analysis on our system's output
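    The abstract does not include code, but as a rough illustration of what a multimodal extension to HRED can look like, the sketch below fuses precomputed image features with each utterance encoding before the context-level encoder. All class names, dimensions, and the simple concatenation fusion are assumptions made for illustration, not the authors' actual implementation.

```python
# A minimal sketch (not the paper's code) of one plausible multimodal HRED:
# image features are fused with each utterance encoding before the
# context-level encoder. Sizes and fusion strategy are assumptions.
import torch
import torch.nn as nn

class MultimodalHRED(nn.Module):
    def __init__(self, vocab_size=1000, emb=128, hid=256, img_feat=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.utt_enc = nn.GRU(emb, hid, batch_first=True)     # utterance-level encoder
        self.img_proj = nn.Linear(img_feat, hid)              # project visual features
        self.ctx_enc = nn.GRU(2 * hid, hid, batch_first=True) # context-level encoder
        self.dec = nn.GRU(emb, hid, batch_first=True)         # response decoder
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, turns, images, target):
        # turns: (batch, n_turns, seq_len) token ids
        # images: (batch, n_turns, img_feat) precomputed visual features
        # target: (batch, tgt_len) response token ids
        b, n, t = turns.shape
        _, h = self.utt_enc(self.embed(turns.view(b * n, t)))
        utt = h[-1].view(b, n, -1)                   # (b, n_turns, hid)
        img = torch.relu(self.img_proj(images))      # (b, n_turns, hid)
        fused = torch.cat([utt, img], dim=-1)        # concatenation fusion per turn
        _, ctx = self.ctx_enc(fused)                 # dialogue-level context state
        dec_out, _ = self.dec(self.embed(target), ctx)
        return self.out(dec_out)                     # logits over the vocabulary
```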

    A Knowledge-Grounded Multimodal Search-Based Conversational Agent

    Multimodal search-based dialogue is a challenging new task: it extends visually grounded question answering systems into multi-turn conversations with access to an external database. We address this new challenge by learning a neural response generation system from the recently released Multimodal Dialogue (MMD) dataset (Saha et al., 2017). We introduce a knowledge-grounded multimodal conversational model where an encoded knowledge base (KB) representation is appended to the decoder input. Our model substantially outperforms strong baselines in terms of text-based similarity measures (over 9 BLEU points, 3 of which are solely due to the use of additional information from the KB).
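    As a hedged illustration of the stated idea of appending an encoded KB representation to the decoder input, the sketch below concatenates a fixed KB vector to the token embedding at every decoding step. Names, dimensions, and how the KB vector is produced are hypothetical; the paper's actual encoder and fusion details may differ.

```python
# A hedged sketch of "append an encoded KB representation to the decoder
# input": the same KB vector is concatenated to the token embedding at each
# step. Illustrative assumptions only, not the paper's implementation.
import torch
import torch.nn as nn

class KBGroundedDecoder(nn.Module):
    def __init__(self, vocab_size=1000, emb=128, hid=256, kb_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.gru = nn.GRU(emb + kb_dim, hid, batch_first=True)  # input = token emb + KB vector
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, tokens, kb_vec, h0):
        # tokens: (batch, seq_len); kb_vec: (batch, kb_dim); h0: (1, batch, hid)
        e = self.embed(tokens)                               # (b, t, emb)
        kb = kb_vec.unsqueeze(1).expand(-1, e.size(1), -1)   # repeat KB vector per step
        x = torch.cat([e, kb], dim=-1)                       # append KB encoding to input
        out, _ = self.gru(x, h0)
        return self.out(out)                                 # next-token logits
```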

    Measuring stress and cognitive load effects on the perceived quality of a multimodal dialogue system

    In this paper we present the results of a pilot study investigating the impact of stress and cognitive load on the perceived interaction quality of a multimodal dialogue system for crisis management. Four test subjects interacted with the system in four differently configured trials designed to induce low and high levels of stress and cognitive load. To measure stress and cognitive load, physiological sensor data and subjective ratings were collected. After each trial the subjects filled in an evaluation questionnaire on the system's interaction quality, and at the end we conducted an in-depth interview with each subject. The trials were recorded with a webcam to facilitate behaviour analysis. Results showed that both factors influence how subjects perceive interaction quality, with cognitive load appearing to have the greater impact. Further quantitative experiments are needed to validate the results and quantify the weight of each factor.