2,041 research outputs found

    On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces

    Multimodal systems have attracted increased attention in recent years, which has made possible important improvements in the technologies for recognition, processing, and generation of multimodal information. However, many issues related to multimodality remain unclear, for example the principles that make it possible to resemble human-human multimodal communication. This chapter focuses on some of the most important challenges that researchers have recently envisioned for future multimodal interfaces. It also describes current efforts to develop intelligent, adaptive, proactive, portable and affective multimodal interfaces.

    Developing multimodal conversational agents: from the use of VoiceXML to Android-based applications

    Proceedings of: 12th International Conference on Practical Applications of Agents and Multi-Agent Systems, PAAMS 2014, Salamanca, Spain, June 4-6, 2014. The current industrial development of commercial conversational agents and dialog systems deploys robust interfaces in strictly defined application domains. However, commercial systems have not yet adopted the new perspectives proposed in academic settings, which would allow straightforward adaptation of these interfaces. In this paper, we propose two approaches to bridge the gap between the academic and industrial perspectives in order to develop conversational agents using an academic paradigm for dialog management while employing industry standards such as the VoiceXML language or the Android OS. Our proposal has been evaluated with the successful development of different spoken and multimodal systems. This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485).
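    The bridging idea described in this abstract, an "academic" dialog manager driving an industry-standard front end, can be pictured with a small sketch: a dialog management policy chooses the next system action, and a separate rendering layer turns that action into a VoiceXML prompt a standard voice platform could play. This is only an illustration under assumed names (FrameDialogManager, render_voicexml, the slot names); it is not the paper's actual architecture or API.

```python
# Hypothetical sketch: a trivial frame-based dialog manager whose selected
# action is rendered as a minimal VoiceXML document, so a VoiceXML platform
# can drive the interaction. Class, function and slot names are illustrative.
from xml.sax.saxutils import escape


class FrameDialogManager:
    """Chooses the next system action from the slots still missing."""

    def __init__(self, required_slots):
        self.required_slots = list(required_slots)
        self.frame = {}

    def update(self, user_slots):
        self.frame.update(user_slots)

    def next_action(self):
        for slot in self.required_slots:
            if slot not in self.frame:
                return ("request", slot)
        return ("confirm", dict(self.frame))


def render_voicexml(action):
    """Render the selected action as a minimal VoiceXML document."""
    kind, payload = action
    if kind == "request":
        prompt = f"Please tell me the {payload}."
    else:
        summary = ", ".join(f"{k} {v}" for k, v in payload.items())
        prompt = f"I understood: {summary}. Is that correct?"
    return (
        '<?xml version="1.0"?>\n'
        '<vxml version="2.1">\n'
        "  <form>\n"
        f"    <block><prompt>{escape(prompt)}</prompt></block>\n"
        "  </form>\n"
        "</vxml>"
    )


if __name__ == "__main__":
    dm = FrameDialogManager(["origin", "destination", "date"])
    dm.update({"origin": "Madrid"})
    print(render_voicexml(dm.next_action()))
```

    The same dialog manager could instead feed an Android UI layer; only the rendering function would change, which is the point of separating policy from interface standard.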

    Conceptual spatial representations for indoor mobile robots

    We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
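    The layered organization mentioned in the abstract, maps at increasing levels of abstraction from sensor-level data up to a conceptual layer linking places, objects and categories, can be pictured with a small data-structure sketch. This is only an illustration of the idea of stacked abstraction layers; the layer names, fields and the toy category inference are assumptions, not the system's actual representation.

```python
# Illustrative sketch of a layered spatial representation: a metric layer
# (occupancy-style data), a topological layer (place graph), and a
# conceptual layer (places linked to categories and observed objects).
# Names and fields are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class MetricLayer:
    resolution_m: float
    occupied_cells: set = field(default_factory=set)  # (x, y) grid indices


@dataclass
class TopologicalLayer:
    places: set = field(default_factory=set)   # place ids
    edges: set = field(default_factory=set)    # (place, place) connectivity


@dataclass
class ConceptualLayer:
    place_category: dict = field(default_factory=dict)  # place id -> "kitchen", ...
    objects_at: dict = field(default_factory=dict)      # place id -> {"mug", ...}

    def infer_category(self, place, object_cues):
        # Toy inference: a place containing a typical object gets its category.
        cues = {"mug": "kitchen", "monitor": "office", "couch": "living room"}
        for obj in object_cues:
            if obj in cues:
                self.place_category[place] = cues[obj]
        self.objects_at.setdefault(place, set()).update(object_cues)


if __name__ == "__main__":
    conceptual = ConceptualLayer()
    conceptual.infer_category("place_3", {"mug", "kettle"})
    print(conceptual.place_category)  # {'place_3': 'kitchen'}
```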

    Proceedings of the 2nd EICS Workshop on Engineering Interactive Computer Systems with SCXML


    SD-TEAM: Interactive Learning, Self-Evaluation and Multimodal Technologies for Multidomain Spoken Dialog Systems

    Speech technology currently supports the development of dialogue systems that function in the limited domains for which they were trained and in the conditions for which they were designed, that is, specific acoustic conditions, speakers, etc. The international scientific community has made significant efforts in exploring methods for adaptation to different acoustic contexts, tasks and types of user. However, further work is needed to produce multimodal spoken dialogue systems capable of exploiting interactivity to learn online in order to improve their performance. The goal is to produce flexible and dynamic multimodal, interactive systems based on spoken communication, capable of detecting their operating conditions automatically and especially of learning from user interactions and experience by evaluating their own performance. Such "living" systems will evolve continuously and without supervision until user satisfaction is achieved. Special attention will be paid to those groups of users for which adaptation and personalisation are essential: among others, people with disabilities that lead to communication difficulties (hearing loss, dysfluent speech, ...), mobility problems, and non-native users. In this context, the SD-TEAM Project aims to advance the development of technologies for interactive learning and evaluation. In addition, it will develop flexible distributed architectures that allow synergistic interaction between processing modules from a variety of dialogue systems designed for distinct tasks, user groups, acoustic conditions, etc. These technologies will be demonstrated via multimodal dialogue systems for access to services from home and for access to unstructured information, based on the multi-domain systems developed in the previous project TIN2005-08660-C04.
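    The self-evaluation loop sketched in the abstract (the system scores its own performance after each interaction and adapts online) can be illustrated with a minimal example. The metric (user corrections per turn) and the update rule below are assumptions chosen for clarity, not the SD-TEAM project's actual method.

```python
# Minimal sketch of online self-evaluation: after each dialogue the system
# estimates its own success from user corrections and adapts a confirmation
# threshold accordingly. Metric and update rule are illustrative assumptions.

class SelfEvaluatingDialogueSystem:
    def __init__(self, confirm_threshold=0.7, learning_rate=0.05):
        self.confirm_threshold = confirm_threshold
        self.learning_rate = learning_rate

    def should_confirm(self, asr_confidence):
        """Ask for explicit confirmation when recognition confidence is low."""
        return asr_confidence < self.confirm_threshold

    def self_evaluate(self, n_corrections, n_turns):
        """Crude per-dialogue score: fewer user corrections means higher success."""
        return 1.0 - min(1.0, n_corrections / max(1, n_turns))

    def adapt(self, n_corrections, n_turns):
        """Relax confirmations when dialogues go well, tighten them when they do not."""
        score = self.self_evaluate(n_corrections, n_turns)
        self.confirm_threshold += self.learning_rate * (0.8 - score)
        self.confirm_threshold = min(0.95, max(0.3, self.confirm_threshold))
        return score


if __name__ == "__main__":
    system = SelfEvaluatingDialogueSystem()
    system.adapt(n_corrections=1, n_turns=10)   # a mostly successful dialogue
    print(round(system.confirm_threshold, 3))
```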

    Multimodal Fusion as Communicative Acts during Human-Robot Interaction

    Research on dialog systems is a very active area in social robotics. During the last two decades, these systems have evolved from those based only on speech recognition and synthesis to the current and modern systems, which include new components and multimodality. By multimodal dialogue we mean the interchange of information among several interlocutors, not just using their voice as the means of transmission but also all the available channels such as gestures, facial expressions, touch, sounds, etc. These channels add information to the message to be transmitted in every dialogue turn. The dialogue manager (IDiM) is one of the components of the robotic dialog system (RDS) and is in charge of managing the dialogue flow during the conversational turns. In order to do that, it is necessary to coherently treat the inputs and outputs of information that flow through different communication channels: audio, vision, radio frequency, touch, etc. In our approach, this multichannel input of information is temporally fused into communicative acts (CAs). Each CA groups the information that flows through the different input channels into the same pack, transmitting a unique message or global idea. Therefore, this temporal fusion of information allows the IDiM to abstract from the channels used during the interaction, focusing only on the message, not on the way it is transmitted. This article presents the whole RDS and describes how the multimodal fusion of information into CAs is performed. Finally, several scenarios where the multimodal dialogue is used are presented. Comunidad de Madrid
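    The fusion step described above lends itself to a short illustration: events arriving on different input channels within the same short time window are packed into one communicative act, so the dialogue manager only sees the resulting message, not the channels. This is a minimal sketch under assumed names (ChannelEvent, CommunicativeAct) and an assumed window length; it is not the IDiM/RDS implementation.

```python
# Illustrative temporal fusion: events from different channels that fall
# within the same time window are grouped into one communicative act (CA),
# so downstream dialogue management is independent of the input channel.
# Class names and the window length are assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class ChannelEvent:
    channel: str      # e.g. "audio", "vision", "touch", "rfid"
    content: str      # recognized content on that channel
    timestamp: float  # seconds


@dataclass
class CommunicativeAct:
    start: float
    end: float
    contents: dict    # channel -> content


def fuse_events(events: List[ChannelEvent], window: float = 1.5) -> List[CommunicativeAct]:
    """Group time-sorted events into CAs; a gap longer than `window` starts a new CA."""
    acts = []
    for ev in sorted(events, key=lambda e: e.timestamp):
        if acts and ev.timestamp - acts[-1].end <= window:
            acts[-1].end = ev.timestamp
            acts[-1].contents[ev.channel] = ev.content
        else:
            acts.append(CommunicativeAct(ev.timestamp, ev.timestamp, {ev.channel: ev.content}))
    return acts


if __name__ == "__main__":
    events = [
        ChannelEvent("audio", "give me that", 10.0),
        ChannelEvent("vision", "pointing at mug", 10.4),
        ChannelEvent("touch", "tap on shoulder", 25.0),
    ]
    for ca in fuse_events(events):
        print(ca.contents)  # first CA fuses audio + vision; second is touch only
```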

    A Survey of Available Corpora For Building Data-Driven Dialogue Systems: The Journal Version

    During the past decade, several areas of speech and language understanding have witnessed substantial breakthroughs from the use of data-driven models. In the area of dialogue systems, the trend is less obvious, and most practical systems are still built through significant engineering and expert knowledge. Nevertheless, several recent results suggest that data-driven approaches are feasible and quite promising. To facilitate research in this area, we have carried out a wide survey of publicly available datasets suitable for data-driven learning of dialogue systems. We discuss important characteristics of these datasets, how they can be used to learn diverse dialogue strategies, and their other potential uses. We also examine methods for transfer learning between datasets and the use of external knowledge. Finally, we discuss the appropriate choice of evaluation metrics for the learning objective.

    Towards Tutoring an Interactive Robot

    Wrede B, Rohlfing K, Spexard TP, Fritsch J. Towards tutoring an interactive robot. In: Hackel M, ed. Humanoid Robots, Human-like Machines. ARS; 2007: 601-612. Many classical approaches developed so far for learning in a human-robot interaction setting have focussed on rather low-level motor learning by imitation. Some doubts, however, have been cast on whether this approach will achieve higher-level functioning. Higher-level processes include, for example, the cognitive capability to assign meaning to actions in order to learn from the tutor. Such capabilities require that an agent not only be able to mimic the motoric movement of the action performed by the tutor, but also understand the constraints, the means and the goal(s) of an action in the course of its learning process. Further support for this hypothesis comes from parent-infant instruction, where it has been observed that parents are very sensitive and adaptive tutors who modify their behavior to the cognitive needs of their infant. Based on these insights, we have started our research agenda on analyzing and modeling learning in a communicative situation by analyzing parent-infant instruction scenarios with automatic methods. Results confirm the well-known observation that parents modify their behavior when interacting with their infant. We assume that these modifications do not only serve to keep the infant’s attention but do indeed help the infant to understand the actual goal of an action, including relevant information such as constraints and means, by enabling it to structure the action into smaller, meaningful chunks. We were able to determine first objective measurements from video as well as audio streams that can serve as cues for this information in order to facilitate learning of actions.
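    The idea of structuring a demonstrated action into smaller, meaningful chunks from objective measurements can be illustrated with a toy example: a per-frame motion-energy signal is split into segments wherever the signal stays low for a while, the way a tutor's pauses might delimit sub-actions. The signal, threshold and minimum pause length below are assumptions for illustration, not the measurements reported in the paper.

```python
# Illustrative sketch of chunking a demonstrated action from a motion cue:
# frames whose motion energy stays below a threshold for several frames are
# treated as pauses, and the segments between pauses become candidate chunks.
# Threshold and minimum pause length are assumptions.
from typing import List, Tuple


def segment_by_pauses(motion_energy: List[float],
                      threshold: float = 0.2,
                      min_pause: int = 3) -> List[Tuple[int, int]]:
    """Return (start, end) frame index pairs of chunks separated by pauses."""
    chunks = []
    start = None
    still = 0
    for i, e in enumerate(motion_energy):
        if e >= threshold:
            if start is None:
                start = i
            still = 0
        else:
            still += 1
            if start is not None and still >= min_pause:
                chunks.append((start, i - still + 1))
                start = None
    if start is not None:
        chunks.append((start, len(motion_energy)))
    return chunks


if __name__ == "__main__":
    # Two movement bursts separated by a clear pause.
    energy = [0.0, 0.5, 0.7, 0.6, 0.1, 0.0, 0.0, 0.0, 0.4, 0.8, 0.3, 0.0]
    print(segment_by_pauses(energy))  # [(1, 4), (8, 12)]
```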