Human-Agent Interaction Model Learning based on Crowdsourcing

Abstract

Missions involving humans interacting with automated systems are becoming increasingly common. Due to the non-deterministic behavior of the human and the potentially high risk of failure caused by human factors, such an integrated system should react intelligently by adapting its behavior when necessary. A promising avenue for designing an efficient interaction-driven system is the mixed-initiative paradigm. In this context, this paper proposes a method to learn the model of a mixed-initiative human-robot mission. The first step in setting up a reliable model is to acquire enough data. To this end, a crowdsourcing campaign was conducted, and learning algorithms were trained on the collected data in order to model the human-robot mission and to optimize a supervision policy with a Markov Decision Process (MDP). This model takes into account the actions of the human operator during the interaction as well as the state of the robot and the mission. Once such a model has been learned, the supervision strategy can be optimized according to a criterion representing the goal of the mission. In this paper, the supervision strategy concerns the robot's operating mode. Simulations based on the MDP model show that planning-under-uncertainty solvers can be used to adapt the robot's mode according to the state of the human-robot system. The optimization of the robot's operating mode appears able to improve the team's performance. The dataset collected through crowdsourcing is therefore material that can be useful for research in human-machine interaction, which is why it has been made available on our website.
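To make the supervision problem concrete, the sketch below shows how an MDP over joint human-robot states can be solved for a mode-switching policy with standard value iteration. The states, actions, transition probabilities, and rewards are purely illustrative assumptions (in the paper, the transition model is learned from the crowdsourced data), not the actual model from this work.

```python
import numpy as np

# Hypothetical, minimal MDP sketch of the supervision problem: each state
# combines an operator condition with a robot operating mode, the action
# selects the robot's next mode, and rewards stand in for mission performance.
# All names and numbers are illustrative assumptions, not from the paper.

states = ["operator_ok/auto", "operator_ok/manual",
          "operator_overloaded/auto", "operator_overloaded/manual"]
actions = ["set_auto", "set_manual"]

n_s, n_a = len(states), len(actions)

# Transition probabilities P[s, a, s'] (in the paper these would be learned
# from the crowdsourced interaction data).
P = np.zeros((n_s, n_a, n_s))
P[0, 0] = [0.90, 0.00, 0.10, 0.00]
P[0, 1] = [0.00, 0.80, 0.00, 0.20]
P[1, 0] = [0.85, 0.00, 0.15, 0.00]
P[1, 1] = [0.00, 0.70, 0.00, 0.30]
P[2, 0] = [0.30, 0.00, 0.70, 0.00]
P[2, 1] = [0.00, 0.20, 0.00, 0.80]
P[3, 0] = [0.40, 0.00, 0.60, 0.00]
P[3, 1] = [0.00, 0.25, 0.00, 0.75]

# Reward R[s, a]: manual mode with an overloaded operator is penalized.
R = np.array([[1.0,  0.5],
              [1.0,  0.8],
              [0.2, -1.0],
              [0.2, -1.0]])

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Standard value iteration; returns optimal values and a greedy policy."""
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * (P @ V)        # Q[s, a] = R[s, a] + gamma * E[V(s')]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R)
for s, a in zip(states, policy):
    print(f"{s:28s} -> {actions[a]}")
```

With these toy numbers the greedy policy keeps the robot in autonomous mode whenever the operator is overloaded, which is the kind of mode adaptation the supervision strategy in the paper optimizes.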
