
    Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks

    A major challenge for the realization of intelligent robots is to supply them with cognitive abilities that allow ordinary users to program them easily and intuitively. One way of such programming is teaching work tasks by interactive demonstration. To make this effective and convenient for the user, the machine must be capable of establishing a common focus of attention and be able to use and integrate spoken instructions, visual perceptions, and non-verbal cues like gestural commands. We report progress in building a hybrid architecture that combines statistical methods, neural networks, and finite state machines into an integrated system for instructing grasping tasks by man-machine interaction. The system combines the GRAVIS robot for visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation, and a modality fusion module to allow multi-modal task-oriented man-machine communication with respect to dextrous robot manipulation of objects.
    Comment: 7 pages, 8 figures
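    To make the fusion idea concrete, the following minimal sketch shows a finite state machine that fires a grasp command only once speech and gesture agree on a focus of attention. All class and field names are illustrative assumptions; the abstract does not expose the GRAVIS system's actual interfaces.

```python
# Minimal sketch of FSM-based modality fusion, in the spirit of the
# hybrid architecture above. Names and the fusion rule are assumptions,
# not the GRAVIS implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechInput:
    verb: str          # e.g. "grasp"
    object_name: str   # e.g. "the red cube"

@dataclass
class GestureInput:
    target_position: tuple  # (x, y, z) indicated by pointing

class InstructionFSM:
    """Waits until both modalities contribute before acting."""
    def __init__(self):
        self.state = "IDLE"
        self.speech: Optional[SpeechInput] = None
        self.gesture: Optional[GestureInput] = None

    def on_speech(self, s: SpeechInput):
        self.speech = s
        self._maybe_fuse()

    def on_gesture(self, g: GestureInput):
        self.gesture = g
        self._maybe_fuse()

    def _maybe_fuse(self):
        # Fire a grasp command only when speech and gesture are both present.
        if self.speech and self.gesture and self.speech.verb == "grasp":
            self.state = "EXECUTING"
            print(f"grasp {self.speech.object_name} at {self.gesture.target_position}")
            self.speech = self.gesture = None
            self.state = "IDLE"

fsm = InstructionFSM()
fsm.on_gesture(GestureInput((0.3, 0.1, 0.05)))       # pointing gesture arrives first
fsm.on_speech(SpeechInput("grasp", "the red cube"))  # spoken command completes the fusion
```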

    Combining heterogeneous inputs for the development of adaptive and multimodal interaction systems

    In this paper we present a novel framework for the integration of visual sensor networks and speech-based interfaces. Our proposal follows the standard reference architecture for fusion systems (JDL) and combines different techniques from Artificial Intelligence, Natural Language Processing, and User Modeling to provide enhanced interaction with users. Firstly, the framework integrates a Cooperative Surveillance Multi-Agent System (CS-MAS), which includes several types of autonomous agents working in a coalition to track targets and make inferences about their positions. Secondly, enhanced conversational agents facilitate human-computer interaction by means of speech. Thirdly, a statistical methodology models the user's conversational behavior, which is learned from an initial corpus and improved with the knowledge acquired from successive interactions. A technique is proposed to fuse these information sources and use the result to decide the next system action.
    This work was supported in part by Projects MEyC TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS (S2009/TIC-1485).
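    The abstract does not detail the statistical user-behavior methodology, so the sketch below stands in with a simple frequency table over (system action, user response) pairs, trained from an initial corpus and refined online. It is an assumption-laden simplification, not the paper's actual model.

```python
# Hedged sketch of a corpus-trained, incrementally updated user model.
from collections import defaultdict, Counter

class UserModel:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, corpus):
        """corpus: iterable of (system_action, user_response) pairs."""
        for action, response in corpus:
            self.counts[action][response] += 1

    def update(self, action, response):
        """Refine the model with knowledge from a new interaction."""
        self.counts[action][response] += 1

    def predict(self, action):
        """Most likely user response to a given system action."""
        if not self.counts[action]:
            return None
        return self.counts[action].most_common(1)[0][0]

model = UserModel()
model.train([("ask_location", "gives_location"), ("ask_location", "asks_why")])
model.update("ask_location", "gives_location")   # learning from a live interaction
print(model.predict("ask_location"))             # -> "gives_location"
```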

    Motivations, Classification and Model Trial of Conversational Agents for Insurance Companies

    Advances in artificial intelligence have renewed interest in conversational agents. So-called chatbots have reached maturity for industrial applications. German insurance companies are interested in improving their customer service and digitizing their business processes. In this work we investigate the potential use of conversational agents in insurance companies by determining which classes of agents are of interest to insurers, identifying relevant use cases and requirements, and developing a prototype for an exemplary insurance scenario. Based on this approach, we derive key findings for conversational agent implementation in insurance companies.
    Comment: 12 pages, 6 figures; accepted for presentation at the International Conference on Agents and Artificial Intelligence 2019 (ICAART 2019)

    A multi-agent architecture to combine heterogeneous inputs in multimodal interaction systems

    Proceedings of: CAEPIA 2013, Federated Congress on Agents and Multi-Agent Systems: from Theory to Practice (ASMas). Madrid, 17-20 September 2013.
    In this paper we present a multi-agent architecture for the integration of visual sensor networks and speech-based interfaces. The proposed architecture combines different techniques from Artificial Intelligence, Natural Language Processing, and User Modeling to provide enhanced interaction with users. Firstly, the architecture integrates a Cooperative Surveillance Multi-Agent System (CS-MAS), which includes several types of autonomous agents working in a coalition to track targets and make inferences about their positions. Secondly, the architecture incorporates enhanced conversational agents that facilitate human-computer interaction by means of speech. Thirdly, a statistical methodology models the user's conversational behavior, which is learned from an initial corpus and subsequently improved with the knowledge acquired from successive interactions. A technique is proposed to fuse these information sources and use the result to decide the next system action.
    This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS (S2009/TIC-1485).
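    As a complement to the user-model sketch given for the companion paper above, the following sketch illustrates the coalition side: several agents contribute position estimates for a tracked target, which are fused into one inference. The confidence-weighted average is an assumption; the CS-MAS internals are not described in the abstract.

```python
# Illustrative coalition-level fusion of per-agent target estimates.
from dataclasses import dataclass

@dataclass
class AgentEstimate:
    agent_id: str
    position: tuple    # (x, y) estimate of the target
    confidence: float  # in (0, 1]

def fuse_positions(estimates):
    """Combine per-agent estimates into one coalition-level inference."""
    total = sum(e.confidence for e in estimates)
    x = sum(e.position[0] * e.confidence for e in estimates) / total
    y = sum(e.position[1] * e.confidence for e in estimates) / total
    return (x, y)

coalition = [
    AgentEstimate("camera_1", (2.0, 3.5), 0.9),
    AgentEstimate("camera_2", (2.4, 3.1), 0.6),
]
print(fuse_positions(coalition))  # fused target position
```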

    A Proposal for Processing and Fusioning Multiple Information Sources in Multimodal Dialog Systems

    Proceedings of: PAAMS 2014 International Workshops. Agent-based Approaches for the Transportation Modelling and Optimisation (AATMO'14) & Intelligent Systems for Context-based Information Fusion (ISCIF'14). Salamanca, Spain, June 4-6, 2014.
    Multimodal dialog systems can be defined as computer systems that process two or more user input modes and combine them with multimedia system output. This paper focuses on the multimodal input, providing a proposal to process and fuse the multiple input modalities in the dialog manager of the system, so that a single combined input is used to select the next system action. We describe an application of our technique to build multimodal systems that process users' spoken utterances, tactile and keyboard inputs, and information related to the context of the interaction. In our proposal, this information is divided into external context and internal context, the latter represented by the detection of the user's intention and emotional state during the dialog.
    This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS (S2009/TIC-1485).
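    A minimal sketch of the fusion step described here: hypotheses from several modalities are merged into a single frame from which the dialog manager selects the next action. The slot names and the precedence rule (most confident modality wins) are illustrative assumptions, not the paper's technique.

```python
# Fuse spoken, tactile, and keyboard hypotheses plus context into one frame.
def fuse_inputs(spoken=None, tactile=None, keyboard=None, context=None):
    """Each modality supplies (slots_dict, confidence); returns one frame."""
    frame = dict(context or {})  # external/internal context as the baseline
    candidates = [m for m in (spoken, tactile, keyboard) if m]
    # Apply lower-confidence hypotheses first so higher-confidence ones override.
    for slots, confidence in sorted(candidates, key=lambda m: m[1]):
        frame.update(slots)
    return frame

combined = fuse_inputs(
    spoken=({"intent": "book_ticket", "city": "Madrid"}, 0.7),
    tactile=({"city": "Salamanca"}, 0.9),   # tap on a map widget
    context={"emotion": "neutral"},
)
print(combined)  # {'emotion': 'neutral', 'intent': 'book_ticket', 'city': 'Salamanca'}
```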

    Developing multimodal conversational agents: from the use of VoiceXML to Android-based applications

    Proceedings of: 12th International Conference on Practical Applications of Agents and Multi-Agent Systems, PAAMS 2014, Salamanca, Spain, June 4-6, 2014.
    The current industrial development of commercial conversational agents and dialog systems deploys robust interfaces in strictly defined application domains. However, commercial systems have not yet adopted the new perspectives proposed in academic settings, which would allow straightforward adaptation of these interfaces. In this paper, we propose two approaches to bridge the gap between the academic and industrial perspectives in order to develop conversational agents using an academic paradigm for dialog management while employing industrial standards like the VoiceXML language or the Android OS. Our proposal has been evaluated with the successful development of several spoken and multimodal systems.
    This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS (S2009/TIC-1485).
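    One way to picture the bridging idea: an academic dialog manager decides the next action, and a thin layer renders that decision as standard VoiceXML for an industrial voice platform. The select_next_action stub below is a hypothetical stand-in for the learned dialog manager; only the VoiceXML document structure is standard.

```python
# Sketch: render a dialog-manager decision as a minimal VoiceXML document.
def select_next_action(dialog_state):
    # Placeholder: a real manager would apply a learned dialog policy here.
    return {"type": "ask", "field": "destination",
            "prompt": "Where would you like to travel?"}

def to_voicexml(action):
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.1">
  <form>
    <field name="{action['field']}">
      <prompt>{action['prompt']}</prompt>
    </field>
  </form>
</vxml>"""

print(to_voicexml(select_next_action({})))
```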

    Building multi-domain conversational systems from single domain resources

    Current advances in the development of mobile and smart devices have generated a growing demand for natural human-machine interaction and favored the intelligent assistant metaphor, in which a single interface gives access to a wide range of functionalities and services. Conversational systems constitute an important enabling technology in this paradigm. However, they are usually designed to interact in semantically restricted domains in which users are offered a limited number of options and functionalities. The design of multi-domain systems implies that a single conversational system is able to assist the user in a variety of tasks. In this paper we propose an architecture for the development of multi-domain conversational systems that allows: (1) integrating available multi- and single-domain speech recognition and understanding modules, (2) combining the available systems in the different domains implied, so that it is not necessary to generate new expensive resources for the multi-domain system, and (3) achieving better domain recognition rates to select the appropriate interaction management strategies. We have evaluated our proposal by combining three systems in different domains to show that the proposed architecture can satisfactorily deal with multi-domain dialogs. (C) 2017 Elsevier B.V. All rights reserved.
    Work partially supported by projects MINECO TEC2012-37832-C02-01 and CICYT TEC2011-28626-C02-02.
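    The routing idea behind the architecture can be sketched as follows: recognize the domain of each utterance and hand it to the corresponding single-domain system, so no new resources are built for the combined system. The keyword-based recognizer is a deliberate simplification; the paper reports improved domain recognition with its own method, which the abstract does not detail.

```python
# Sketch: route each utterance to an existing single-domain system.
SINGLE_DOMAIN_SYSTEMS = {
    "travel":  lambda u: f"[travel system handles: {u}]",
    "banking": lambda u: f"[banking system handles: {u}]",
    "weather": lambda u: f"[weather system handles: {u}]",
}

DOMAIN_KEYWORDS = {
    "travel":  {"flight", "train", "ticket"},
    "banking": {"account", "transfer", "balance"},
    "weather": {"rain", "forecast", "temperature"},
}

def recognize_domain(utterance):
    """Score each domain by keyword overlap; return the best match."""
    words = set(utterance.lower().split())
    scores = {d: len(words & kw) for d, kw in DOMAIN_KEYWORDS.items()}
    return max(scores, key=scores.get)

def handle(utterance):
    domain = recognize_domain(utterance)
    return SINGLE_DOMAIN_SYSTEMS[domain](utterance)

print(handle("I need a train ticket to Madrid"))  # routed to the travel system
```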