15,062 research outputs found

    Human-Robot Trust Integrated Task Allocation and Symbolic Motion planning for Heterogeneous Multi-robot Systems

    Full text link
    This paper presents a human-robot trust integrated task allocation and motion planning framework for multi-robot systems (MRS) performing a set of tasks concurrently. A set of parallel task specifications is conjuncted with the MRS model to synthesize a task allocation automaton. Each transition of the task allocation automaton is associated with the total trust value the human places in the corresponding robots. Here, the human-robot trust model is constructed as a dynamic Bayesian network (DBN) that considers individual robot performance, a safety coefficient, human cognitive workload, and an overall evaluation of the task allocation. Hence, a task allocation path with maximum encoded human-robot trust can be searched based on the current trust value of each robot in the task allocation automaton. Symbolic motion planning (SMP) is implemented for each robot after it obtains its sequence of actions. The task allocation path can be intermittently updated with this DBN-based trust model. The overall strategy is demonstrated in a simulation with 5 robots and 3 parallel subtask automata.
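
    The abstract above describes searching the task allocation automaton for a path of maximum accumulated trust. The following Python sketch illustrates that idea under simplifying assumptions (the automaton unrolls into a DAG, each transition carries a single numeric trust value, and all state and robot names are hypothetical); it is not the paper's implementation.

```python
# Minimal sketch: pick the task-allocation path that maximizes accumulated
# human-robot trust over an automaton whose transitions are labelled with the
# trust placed in the robots assigned on that transition. Assumes a DAG.
from collections import defaultdict

def max_trust_path(transitions, start, goals):
    """transitions: list of (src, dst, robots, trust) tuples.
    Returns (best_trust, robot assignments along the path) from start to a goal."""
    graph = defaultdict(list)
    for src, dst, robots, trust in transitions:
        graph[src].append((dst, robots, trust))

    best = {}  # memo: state -> (accumulated trust, remaining assignments)

    def visit(state):
        if state in best:
            return best[state]
        if state in goals:
            best[state] = (0.0, [])
            return best[state]
        candidates = []
        for dst, robots, trust in graph[state]:
            sub_trust, sub_path = visit(dst)
            candidates.append((trust + sub_trust, [robots] + sub_path))
        best[state] = max(candidates) if candidates else (float("-inf"), [])
        return best[state]

    return visit(start)

# Toy usage: two alternative allocations of the first subtask.
transitions = [
    ("q0", "q1", ("robot_1",), 0.8),
    ("q0", "q1", ("robot_2",), 0.6),
    ("q1", "qf", ("robot_1", "robot_3"), 1.5),
]
print(max_trust_path(transitions, "q0", {"qf"}))
```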

    Trust-Based Control of (Semi)Autonomous Mobile Robotic Systems

    Get PDF
    Despite great achievements made in (semi)autonomous robotic systems, human participation is still an essential part, especially for decision-making about the autonomy allocation of robots in complex and uncertain environments. However, human decisions may not be optimal due to limited cognitive capacities and subjective human factors. In human-robot interaction (HRI), trust is a major factor that determines humans' use of autonomy. Over- or under-trust may lead to disproportionate autonomy allocation, resulting in decreased task performance and/or increased human workload. In this work, we develop automated decision-making aids utilizing computational trust models to help human operators achieve a more effective and unbiased allocation. Our proposed decision aids resemble the way that humans make an autonomy allocation decision; however, they are unbiased and aim to reduce human workload, improve the overall performance, and achieve higher acceptance by the human. We consider two types of autonomy control schemes for (semi)autonomous mobile robotic systems. The first type is a two-level control scheme which switches between manual and autonomous control modes. For this type, we propose automated decision aids via a computational trust and self-confidence model. We provide analytical tools to investigate the steady-state effects of the proposed autonomy allocation scheme on robot performance and human workload. We also develop an autonomous decision pattern correction algorithm using nonlinear model predictive control to help the human gradually adapt to a better allocation pattern. The second type is a mixed-initiative bilateral teleoperation control scheme which requires mixing of autonomous and manual control. For this type, we utilize computational two-way trust models. Here, mixed initiative is enabled by scaling the manual and autonomous control inputs with a function of the computational human-to-robot trust. The haptic force feedback cue sent by the robot is dynamically scaled with a function of the computational robot-to-human trust to reduce the human's physical workload. Using the proposed control schemes, our human-in-the-loop tests show that the trust-based automated decision aids generally improve the overall robot performance and reduce the operator workload compared to a manual allocation scheme. The proposed decision aids are also generally preferred and trusted by the participants. Finally, the trust-based control schemes are extended to single-operator-multi-robot applications. A theoretical control framework is developed for these applications, and the stability and convergence issues under the switching scheme between different robots are addressed via passivity-based measures.
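
    To make the mixed-initiative scheme described above concrete, here is a minimal Python sketch of trust-scaled control blending and haptic-feedback scaling. The specific blending rule, the clipping of trust to [0, 1], and the attenuation of the force cue are assumptions for illustration, not the thesis' exact control laws.

```python
# Minimal sketch (assumptions, not the thesis' control laws): blend manual and
# autonomous commands with human-to-robot trust, and scale the haptic feedback
# cue with robot-to-human trust.
import numpy as np

def blend_control(u_manual, u_autonomous, trust_h2r):
    """Mixed-initiative input: higher human-to-robot trust shifts authority
    toward the autonomous controller. trust_h2r assumed to lie in [0, 1]."""
    alpha = np.clip(trust_h2r, 0.0, 1.0)
    return alpha * np.asarray(u_autonomous) + (1.0 - alpha) * np.asarray(u_manual)

def scale_haptic_feedback(force_cue, trust_r2h, gain=1.0):
    """Attenuate the haptic force cue when the robot's trust in the human is
    high, reducing the operator's physical workload (hypothetical scaling)."""
    beta = np.clip(trust_r2h, 0.0, 1.0)
    return gain * (1.0 - beta) * np.asarray(force_cue)

# Toy usage
u = blend_control(u_manual=[0.4, 0.0], u_autonomous=[0.1, 0.2], trust_h2r=0.7)
f = scale_haptic_feedback(force_cue=[2.0, -1.0], trust_r2h=0.5)
print(u, f)
```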

    Suitable task allocation in intelligent systems for assistive environments

    Get PDF
    The growing need for technological assistance to support people with special needs demands increasingly efficient and better-performing systems. With this aim, this work advances a multi-robot platform that allows the coordinated control of different agents and other elements in the environment to achieve autonomous behavior driven by the user's needs or will. The environment is therefore structured according to the capabilities of each agent and element in it, and to the dynamic context, in order to generate adequate actuation plans and coordinate their execution.
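
    As a rough illustration of capability-driven task allocation in such an assistive environment, the following Python sketch greedily matches each requested task to the agent whose declared capabilities cover it. The agents, tasks, and scoring rule are invented for the example and are not taken from the paper.

```python
# Minimal, purely illustrative sketch: assign each assistive task to the agent
# whose declared capabilities fully cover its requirements.
def allocate_tasks(tasks, agents):
    """tasks: {task: set of required capabilities}
    agents: {agent: set of offered capabilities}
    Returns {task: agent or None if no agent covers the task}."""
    plan = {}
    for task, needed in tasks.items():
        best_agent, best_score = None, 0
        for agent, caps in agents.items():
            score = len(needed & caps)  # how many requirements this agent meets
            if score > best_score:
                best_agent, best_score = agent, score
        plan[task] = best_agent if best_score == len(needed) else None
    return plan

agents = {"mobile_robot": {"navigate", "carry"}, "smart_door": {"open_door"}}
tasks = {"fetch_water": {"navigate", "carry"}, "let_visitor_in": {"open_door"}}
print(allocate_tasks(tasks, agents))
```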

    Human-Machine Collaborative Optimization via Apprenticeship Scheduling

    Full text link
    Coordinating agents to complete a set of tasks with intercoupled temporal and resource constraints is computationally challenging, yet human domain experts can solve these difficult scheduling problems using paradigms learned through years of apprenticeship. A process for manually codifying this domain knowledge within a computational framework is necessary to scale beyond the "single-expert, single-trainee" apprenticeship model. However, human domain experts often have difficulty describing their decision-making processes, causing the codification of this knowledge to become laborious. We propose a new approach for capturing domain-expert heuristics through a pairwise ranking formulation. Our approach is model-free and does not require enumerating or iterating through a large state space. We empirically demonstrate that this approach accurately learns multifaceted heuristics on a synthetic data set incorporating job-shop scheduling and vehicle routing problems, as well as on two real-world data sets consisting of demonstrations of experts solving a weapon-to-target assignment problem and a hospital resource allocation problem. We also demonstrate that policies learned from human scheduling demonstrations via apprenticeship learning can substantially improve the efficiency of a branch-and-bound search for an optimal schedule. We employ this human-machine collaborative optimization technique on a variant of the weapon-to-target assignment problem. We demonstrate that this technique generates solutions substantially superior to those produced by human domain experts at a rate up to 9.5 times faster than an optimization approach and can be applied to optimally solve problems twice as complex as those solved by a human demonstrator. Comment: Portions of this paper were published in the Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper consists of 50 pages with 11 figures and 4 tables.
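
    The pairwise ranking formulation mentioned above can be sketched as follows: every action the expert actually scheduled is treated as preferred over the alternatives available at the same decision point, and a classifier is fit on feature differences so it can later rank candidate actions. This is an illustrative reconstruction with made-up features, not the authors' code.

```python
# Minimal sketch of pairwise-ranking apprenticeship learning: train a linear
# classifier on (chosen - alternative) feature differences, then rank candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pairwise_dataset(decision_points):
    """decision_points: list of (chosen_features, [alternative_features, ...])."""
    X, y = [], []
    for chosen, alternatives in decision_points:
        for alt in alternatives:
            X.append(np.asarray(chosen) - np.asarray(alt)); y.append(1)
            X.append(np.asarray(alt) - np.asarray(chosen)); y.append(0)
    return np.array(X), np.array(y)

def rank_actions(model, candidates):
    """Score each candidate action and return indices sorted best-first."""
    scores = model.decision_function(np.asarray(candidates))
    return np.argsort(-scores)

# Toy demonstrations: features = [deadline slack, travel distance] (hypothetical)
demos = [([0.2, 1.0], [[0.9, 3.0], [0.7, 2.5]]),
         ([0.1, 0.8], [[0.8, 2.0]])]
X, y = pairwise_dataset(demos)
model = LogisticRegression().fit(X, y)
print(rank_actions(model, [[0.3, 1.2], [0.9, 3.1]]))
```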

    The Assistive Multi-Armed Bandit

    Full text link
    Learning preferences implicit in the choices humans make is a well-studied problem in both economics and computer science. However, most work makes the assumption that humans are acting (noisily) optimally with respect to their preferences. Such approaches can fail when people are themselves learning about what they want. In this work, we introduce the assistive multi-armed bandit, where a robot assists a human playing a bandit task to maximize cumulative reward. In this problem, the human does not know the reward function but can learn it through the rewards received from arm pulls; the robot only observes which arms the human pulls but not the reward associated with each pull. We offer sufficient and necessary conditions for successfully assisting the human in this framework. Surprisingly, better human performance in isolation does not necessarily lead to better performance when assisted by the robot: a human policy can do better by effectively communicating its observed rewards to the robot. We conduct proof-of-concept experiments that support these results. We see this work as contributing towards a theory behind algorithms for human-robot interaction. Comment: Accepted to HRI 2019
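
    A minimal simulation of the assistive bandit setting can clarify the information asymmetry described above: the human observes rewards and learns epsilon-greedily, while the robot sees only which arms the human pulls. The learning rules, parameters, and reward model below are assumptions for illustration, not the paper's algorithms.

```python
# Minimal sketch of an assistive bandit: the human sees rewards; the robot only
# sees the human's arm choices and pulls the arm chosen most often so far.
import random
from collections import Counter

def assistive_bandit(arm_means, horizon=200, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    n = len(arm_means)
    human_estimates, human_counts = [0.0] * n, [0] * n
    pull_history = Counter()
    total_reward = 0.0

    for t in range(horizon):
        # Human turn: epsilon-greedy on its own reward estimates.
        if rng.random() < epsilon or t < n:
            arm = rng.randrange(n)
        else:
            arm = max(range(n), key=lambda a: human_estimates[a])
        reward = rng.gauss(arm_means[arm], 1.0)
        human_counts[arm] += 1
        human_estimates[arm] += (reward - human_estimates[arm]) / human_counts[arm]
        pull_history[arm] += 1
        total_reward += reward

        # Robot turn: observes only arm choices, never the rewards.
        robot_arm = pull_history.most_common(1)[0][0]
        total_reward += rng.gauss(arm_means[robot_arm], 1.0)

    return total_reward

print(assistive_bandit([0.0, 0.5, 1.0]))
```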