269 research outputs found

    Domain and subtask-adaptive conversational agents to provide an enhanced human-agent interaction

    Proceedings of: 12th International Conference on Practical Applications of Agents and Multi-Agent Systems, PAAMS 2014, Salamanca, Spain, June 4-6, 2014. One of the most demanding tasks when developing conversational agents is designing the dialog manager, which decides the next system response considering the user's actions and the dialog history. A previously developed statistical dialog management technique is adapted in this work to reduce the effort and time required to design the dialog manager. The technique not only allows an easy adaptation to new domains, but also deals with the different subtasks by means of specific dialog models adapted to each dialog objective in the domain of a multiagent system. The practical application of the proposed technique to develop a conversational agent providing railway information shows that the use of these specific dialog models increases the quality and number of successful interactions with the agent in comparison with developing a single dialog model for the complete domain. This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS (S2009/TIC-1485).
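
    As a rough illustration of the subtask-specific statistical dialog models described above (not the authors' implementation; the class, state, and action names below are hypothetical), a dialog manager of this kind can be sketched as a per-subtask frequency model that picks the most likely next system action given the current dialog state:

```python
# Illustrative sketch (not the paper's code): one statistical dialog model per
# subtask, each predicting the next system action from the observed dialog state.
from collections import Counter, defaultdict

class SubtaskDialogManager:
    def __init__(self):
        # For each (subtask, state), count how often each system action followed.
        self.counts = defaultdict(Counter)

    def train(self, dialogs):
        """dialogs: iterable of (subtask, state, system_action) tuples from a corpus."""
        for subtask, state, action in dialogs:
            self.counts[(subtask, state)][action] += 1

    def next_action(self, subtask, state, fallback="ask_clarification"):
        """Pick the most frequent action seen for this subtask and dialog state."""
        actions = self.counts.get((subtask, state))
        return actions.most_common(1)[0][0] if actions else fallback

# Hypothetical railway-information domain with "timetable" and "fare" subtasks.
dm = SubtaskDialogManager()
dm.train([
    ("timetable", ("origin", "destination"), "ask_date"),
    ("timetable", ("origin", "destination", "date"), "inform_trains"),
    ("fare", ("origin", "destination"), "ask_ticket_class"),
])
print(dm.next_action("timetable", ("origin", "destination")))  # -> ask_date
```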

    Combining heterogeneous inputs for the development of adaptive and multimodal interaction systems

    In this paper we present a novel framework for the integration of visual sensor networks and speech-based interfaces. Our proposal follows the standard reference architecture in fusion systems (JDL) and combines different techniques related to Artificial Intelligence, Natural Language Processing, and User Modeling to provide an enhanced interaction with users. Firstly, the framework integrates a Cooperative Surveillance Multi-Agent System (CS-MAS), which includes several types of autonomous agents working in a coalition to track and make inferences on the positions of the targets. Secondly, enhanced conversational agents facilitate human-computer interaction by means of speech. Thirdly, a statistical methodology models the user's conversational behavior, which is learned from an initial corpus and improved with the knowledge acquired from successive interactions. A technique is proposed to facilitate the multimodal fusion of these information sources and to consider the result when deciding the next system action. This work was supported in part by Projects MEyC TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS S2009/TIC-1485.
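
    A minimal sketch of the kind of late multimodal fusion the abstract describes, assuming confidence-scored hypotheses arriving from a speech interface and a visual sensor network (the field names, source weights, and example values are illustrative assumptions, not taken from the paper):

```python
# Illustrative late-fusion sketch: weight each source's hypotheses by confidence
# and pick the best-supported one to inform the next system action.
from dataclasses import dataclass

@dataclass
class Observation:
    source: str        # e.g. "speech" or "vision"
    hypothesis: str    # e.g. a user intention or a target location
    confidence: float  # normalized to [0, 1]

def fuse(observations, weights=None):
    """Score each hypothesis by the weighted confidence of the sources reporting it."""
    weights = weights or {"speech": 0.6, "vision": 0.4}
    scores = {}
    for obs in observations:
        w = weights.get(obs.source, 0.5)
        scores[obs.hypothesis] = scores.get(obs.hypothesis, 0.0) + w * obs.confidence
    return max(scores, key=scores.get)

# The fused hypothesis would feed the dialog manager's choice of next action.
fused = fuse([
    Observation("speech", "user_requests_help", 0.8),
    Observation("vision", "user_near_exit", 0.9),
    Observation("speech", "user_near_exit", 0.3),
])
print(fused)
```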

    A multi-agent architecture to combine heterogeneous inputs in multimodal interaction systems

    Proceedings of: CAEPIA 2013, federated congress Agents and Multi-Agent Systems: from Theory to Practice (ASMas), Madrid, September 17-20, 2013. In this paper we present a multi-agent architecture for the integration of visual sensor networks and speech-based interfaces. The proposed architecture combines different techniques related to Artificial Intelligence, Natural Language Processing, and User Modeling to provide an enhanced interaction with users. Firstly, the architecture integrates a Cooperative Surveillance Multi-Agent System (CS-MAS), which includes several types of autonomous agents working in a coalition to track and make inferences on the positions of the targets. Secondly, the proposed architecture incorporates enhanced conversational agents to facilitate human-computer interaction by means of speech. Thirdly, a statistical methodology allows modeling the user's conversational behavior, which is learned from an initial corpus and subsequently improved with the knowledge acquired from successive interactions. A technique is proposed to facilitate the multimodal fusion of these information sources and to consider the result when deciding the next system action. This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS (S2009/TIC-1485).

    A Neural Network Approach to Intention Modeling for User-Adapted Conversational Agents

    Spoken dialogue systems have been proposed to enable a more natural and intuitive interaction with the environment and human-computer interfaces. In this contribution, we present a framework based on neural networks that allows modeling of the user's intention during the dialogue and uses this prediction to dynamically adapt the dialogue model of the system, taking into consideration the user's needs and preferences. We have evaluated our proposal to develop a user-adapted spoken dialogue system that facilitates tourist information and services, and we provide a detailed discussion of the positive influence of our proposal on the success of the interaction, the information and services provided, and the quality perceived by the users.
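
    A minimal sketch of neural intention prediction of the kind the abstract describes, assuming a bag-of-dialog-acts encoding of the history; the feature names, intention labels, and the use of scikit-learn are illustrative assumptions rather than the authors' actual setup:

```python
# Illustrative sketch: a small feed-forward network that predicts the user's
# intention from counts of dialog acts observed so far in the conversation.
from sklearn.feature_extraction import DictVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Each training example: dialog-act counts seen so far -> user intention (hypothetical data).
histories = [
    {"greet": 1, "ask_hotel": 1},
    {"greet": 1, "ask_restaurant": 1, "inform_price": 1},
    {"ask_hotel": 1, "inform_area": 1},
]
intentions = ["book_hotel", "find_restaurant", "book_hotel"]

model = make_pipeline(
    DictVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
model.fit(histories, intentions)

# The predicted intention could then drive the choice of an adapted dialog model.
print(model.predict([{"greet": 1, "ask_hotel": 1, "inform_area": 1}]))
```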

    Agents for educational games and simulations

    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and Multiagent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    Evaluation of a hierarchical reinforcement learning spoken dialogue system

    We describe an evaluation of spoken dialogue strategies designed using hierarchical reinforcement learning agents. The dialogue strategies were learnt in a simulated environment and tested in a laboratory setting with 32 users. These dialogues were used to evaluate three types of machine dialogue behaviour: hand-coded, fully-learnt and semi-learnt. These experiments also served to evaluate the realism of simulated dialogues using two proposed metrics contrasted with ‘Precision-Recall’. The learnt dialogue behaviours used the Semi-Markov Decision Process (SMDP) model, and we report the first evaluation of this model in a realistic conversational environment. Experimental results in the travel planning domain provide evidence to support the following claims: (a) hierarchical semi-learnt dialogue agents are a better alternative (with higher overall performance) than deterministic or fully-learnt behaviour; (b) spoken dialogue strategies learnt with highly coherent user behaviour and conservative recognition error rates (keyword error rate of 20%) can outperform a reasonable hand-coded strategy; and (c) hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi) automatic design of optimized dialogue behaviours in larger-scale systems.
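
    For orientation, the SMDP model mentioned in the abstract generalizes the standard Q-learning update to temporally extended actions (e.g. whole subdialogues) that last several turns and return an accumulated reward. The following is a generic textbook-style sketch of such an update, not the evaluated system; all state, option, and reward values are hypothetical:

```python
# Illustrative SMDP-style Q-learning sketch: an "option" (a whole subdialogue)
# lasts tau turns, so the bootstrap term is discounted by gamma**tau.
from collections import defaultdict
import random

GAMMA, ALPHA = 0.95, 0.1
Q = defaultdict(float)  # Q[(state, option)] -> estimated value

def smdp_q_update(state, option, cumulative_reward, tau, next_state, options):
    """Q(s,o) <- Q(s,o) + alpha * [R + gamma^tau * max_o' Q(s',o') - Q(s,o)]."""
    best_next = max(Q[(next_state, o)] for o in options) if options else 0.0
    target = cumulative_reward + (GAMMA ** tau) * best_next
    Q[(state, option)] += ALPHA * (target - Q[(state, option)])

def choose_option(state, options, epsilon=0.1):
    """Epsilon-greedy selection over temporally extended options."""
    if random.random() < epsilon:
        return random.choice(options)
    return max(options, key=lambda o: Q[(state, o)])

# Example: a "collect_dates" subdialogue that took 3 turns and earned reward 4.
smdp_q_update("need_travel_dates", "collect_dates", 4.0, 3, "dates_known",
              ["collect_dates", "confirm_booking"])
```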

    Bringing AI into the Classroom: Designing Smart Personal Assistants as Learning Tutors

    Despite a growing body of research about the design and use of Smart Personal Assistants like Amazon’s Alexa, little is known about their ability to serve as learning tutors. New emerging ecosystems empower educators to develop their own agents without deep technical knowledge. The objective of our study is to find and validate a general set of design principles that educators can use to design their own agents. Using a design science research approach, we present requirements derived from students and educators as well as from Information Systems (IS) and education theory. Next, we formulate design principles and evaluate them with a focus group before we instantiate our artifact in an everyday learning environment. The findings of this short paper indicate that the design principles and corresponding artifact are able to significantly improve learning outcomes. The completed work aims to develop a nascent design theory for designing Smart Personal Assistants as learning tutors.