
    Human-Robot Trust Integrated Task Allocation and Symbolic Motion Planning for Heterogeneous Multi-robot Systems

    This paper presents a human-robot trust integrated task allocation and motion planning framework for multi-robot systems (MRS) performing a set of tasks concurrently. A set of parallel task specifications is conjoined with the MRS model to synthesize a task allocation automaton. Each transition of the task allocation automaton is associated with the total trust value the human places in the corresponding robots. Here, the human-robot trust model is constructed with a dynamic Bayesian network (DBN) that considers individual robot performance, a safety coefficient, human cognitive workload, and an overall evaluation of the task allocation. Hence, a task allocation path with maximum encoded human-robot trust can be searched for in the task allocation automaton based on the current trust value of each robot. Symbolic motion planning (SMP) is then implemented for each robot once it obtains its sequence of actions. The task allocation path can be intermittently updated with this DBN-based trust model. The overall strategy is demonstrated by a simulation with 5 robots and 3 parallel subtask automata.
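    The core search step, finding a maximum-trust task allocation path through the automaton, can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes the task allocation automaton is acyclic (each transition allocates one subtask) and that the DBN trust model has already attached a trust value to every transition; all state names, labels, and numbers are hypothetical.

    from collections import defaultdict

    def max_trust_path(transitions, start, accepting):
        """transitions: list of (src, dst, label, trust); returns (total trust, label path)."""
        graph = defaultdict(list)
        for src, dst, label, trust in transitions:
            graph[src].append((dst, label, trust))

        best = {}  # memo: state -> (best accumulated trust, remaining label path)

        def visit(state):
            if state in best:
                return best[state]
            if state in accepting:
                best[state] = (0.0, [])
                return best[state]
            result = (float("-inf"), None)  # dead end unless a successor reaches acceptance
            for dst, label, trust in graph[state]:
                sub_trust, sub_path = visit(dst)
                if sub_path is not None and trust + sub_trust > result[0]:
                    result = (trust + sub_trust, [label] + sub_path)
            best[state] = result
            return result

        return visit(start)

    # Hypothetical automaton: two ways to allocate "inspect" before "transport".
    transitions = [
        ("q0", "q1", "inspect->robot1", 0.8),
        ("q0", "q1", "inspect->robot2", 0.6),
        ("q1", "q2", "transport->robot3", 0.7),
    ]
    print(max_trust_path(transitions, "q0", {"q2"}))  # picks robot1, total trust 1.5

    Re-running this search whenever the DBN updates each robot's trust value gives the intermittent re-allocation described above.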

    Multi-Robot Symbolic Task and Motion Planning Leveraging Human Trust Models: Theory and Applications

    Multi-robot systems (MRS) can accomplish more complex tasks with two or more robots and have produced a broad set of applications. The presence of a human operator in an MRS can help guarantee safe task execution, but the operator is subject to heavier stress and cognitive workload when collaborating with an MRS than with a single robot. It is therefore important for the MRS to have a provably correct task and motion planning solution for a complex task; this can reduce the human workload during task supervision and improve the reliability of human-MRS collaboration. This dissertation relies on formal verification to provide provably correct solutions for the robotic system. One of the challenges in task and motion planning under temporal logic task specifications is developing computationally efficient MRS frameworks. The dissertation first presents an automaton-based task and motion planning framework for MRS that satisfies finite words of linear temporal logic (LTL) task specifications in parallel and concurrently. Furthermore, the dissertation develops a computational trust model to improve human-MRS collaboration for a motion task. Notably, existing work commonly underemphasizes environmental attributes when investigating the factors that affect human trust in robots. Our computational trust model builds a linear state-space (LSS) equation to capture the influence of environment attributes on human trust in an MRS. A Bayesian-optimization-based experimental design (BOED) is proposed to sequentially learn the human-MRS trust model parameters in a data-efficient way. Finally, the dissertation shapes a reward function for the collaborative human-MRS complex task by referring to the above LTL task specification and computational trust model. A Bayesian active reinforcement learning (RL) algorithm is used to concurrently learn the shaped reward function and explore the most trustworthy task and motion planning solution.
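    The linear state-space (LSS) trust model can be sketched as below. This is a minimal sketch under assumed values: the matrices A, B, C, the two environment attributes, and all numbers are illustrative placeholders; in the dissertation the parameters are learned from human responses via the BOED procedure rather than fixed by hand.

    import numpy as np

    A = np.array([[0.9]])          # persistence of the latent trust state (assumed scalar)
    B = np.array([[0.05, -0.10]])  # assumed effect of [visibility, obstacle density]
    C = np.array([[1.0]])          # observation model: reported trust = latent trust

    def simulate_trust(x0, attributes, noise_std=0.02, seed=0):
        """Roll the LSS model forward over a sequence of environment attribute vectors."""
        rng = np.random.default_rng(seed)
        x = np.atleast_2d(x0).T  # column-vector state
        reported = []
        for u in attributes:
            x = A @ x + B @ np.atleast_2d(u).T          # x_{t+1} = A x_t + B u_t
            reported.append(float((C @ x)[0, 0]) + rng.normal(0.0, noise_std))
        return reported

    # Trust trajectory as visibility improves and obstacle density drops.
    print(simulate_trust([0.5], [[0.8, 0.4], [0.9, 0.3], [1.0, 0.2]]))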

    Challenges in Collaborative HRI for Remote Robot Teams

    Collaboration between human supervisors and remote teams of robots is highly challenging, particularly in high-stakes, distant, hazardous locations, such as off-shore energy platforms. In order for these teams of robots to truly be beneficial, they need to be trusted to operate autonomously, performing tasks such as inspection and emergency response, thus reducing the number of personnel placed in harm's way. As remote robots are generally trusted less than robots in close proximity, we present a solution to instil trust in the operator through a 'mediator robot' that can exhibit social skills, alongside sophisticated visualisation techniques. In this position paper, we present general challenges and then take a closer look at one challenge in particular, discussing an initial study which investigates the relationship between the level of control the supervisor hands over to the mediator robot and how this affects their trust. We show that the supervisor is more likely to have higher trust overall if their initial experience involves handing over control of the emergency situation to the robotic assistant. We discuss this result here, as well as other challenges and interaction techniques for human-robot collaboration.
    Comment: 9 pages. Peer-reviewed position paper accepted to the CHI 2019 Workshop: The Challenges of Working on Social Robots that Collaborate with People (SIRCHI2019), ACM CHI Conference on Human Factors in Computing Systems, May 2019, Glasgow, UK.

    Robot Navigation in Unseen Spaces using an Abstract Map

    Human navigation in built environments depends on symbolic spatial information which has unrealised potential to enhance robot navigation capabilities. Information sources such as labels, signs, maps, planners, spoken directions, and navigational gestures communicate a wealth of spatial information to the navigators of built environments; a wealth of information that robots typically ignore. We present a robot navigation system that uses the same symbolic spatial information employed by humans to purposefully navigate in unseen built environments with a level of performance comparable to humans. The navigation system uses a novel data structure called the abstract map to imagine malleable spatial models for unseen spaces from spatial symbols. Sensorimotor perceptions from a robot are then employed to provide purposeful navigation to symbolic goal locations in the unseen environment. We show how a dynamic system can be used to create malleable spatial models for the abstract map, and provide an open source implementation to encourage future work in the area of symbolic navigation. Symbolic navigation performance of humans and a robot is evaluated in a real-world built environment. The paper concludes with a qualitative analysis of human navigation strategies, providing further insights into how the symbolic navigation capabilities of robots in unseen built environments can be improved in the future.
    Comment: 15 pages, published in IEEE Transactions on Cognitive and Developmental Systems (http://doi.org/10.1109/TCDS.2020.2993855); see https://btalb.github.io/abstract_map/ for access to software.
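    A minimal sketch of the kind of malleable spatial model the abstract map builds is given below: symbolic places become point masses and symbolic spatial cues (for example, "the lab is roughly 10 m from the lobby") become spring-like constraints that are relaxed into a plausible layout. The constants, constraint form, and integration scheme here are illustrative assumptions only; the actual system is available from the open-source implementation linked above.

    import numpy as np

    def relax_layout(places, springs, iters=500, dt=0.05, damping=0.85, stiffness=1.0):
        """places: {name: (x, y) initial guess}; springs: list of (a, b, rest_length)."""
        names = list(places)
        idx = {n: i for i, n in enumerate(names)}
        pos = np.array([places[n] for n in names], dtype=float)
        vel = np.zeros_like(pos)
        for _ in range(iters):
            force = np.zeros_like(pos)
            for a, b, rest in springs:
                i, j = idx[a], idx[b]
                delta = pos[j] - pos[i]
                dist = np.linalg.norm(delta) + 1e-9
                f = stiffness * (dist - rest) * delta / dist  # Hooke-style pull/push
                force[i] += f
                force[j] -= f
            vel = damping * (vel + dt * force)  # damped update so the layout settles
            pos += dt * vel
        return {n: pos[idx[n]] for n in names}

    # The robot has only heard two distance cues; it imagines a layout consistent
    # with them, which later sensing would refine.
    print(relax_layout({"lobby": (0, 0), "lab": (1, 0), "office": (1, 1)},
                       [("lobby", "lab", 10.0), ("lab", "office", 5.0)]))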

    Analysis and Synthesis of Effective Human-Robot Interaction at Varying Levels in Control Hierarchy

    Robot controller design is usually hierarchical, combining high-level task and motion planning with low-level control law design. In the presented work, we investigate low-level and high-level control design methods that guarantee the joint performance of human-robot interaction (HRI). The first part uses a low-level method based on the switched linear quadratic regulator (SLQR), an optimal control policy derived from a quadratic cost function. By incorporating measures of robot performance and human workload, the controller determines when to utilize the human operator in a way that improves overall task performance while reducing operator workload. The method is demonstrated in simulation on the complex dynamics of an autonomous underwater vehicle (AUV), showing that it maintains task performance while keeping operator workload reduced. An extension of this work to path planning for obstacle avoidance is also presented, with simulations showing human planning successfully guiding the AUV around obstacles to reach its goals. In the high-level approach, formal methods are applied to a scenario where an operator oversees a group of mobile robots as they navigate an unknown environment. Autonomy in this scenario uses specifications written in linear temporal logic (LTL) to conduct symbolic motion planning in a guaranteed-safe, though very conservative, manner. A human operator, using gathered environmental data, is able to produce a more efficient path. To aid in task decomposition and real-time switching, a dynamic human trust model is used. Simulations are given showing the successful implementation of this method.
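    The switched-LQR idea can be sketched as below. This is a hedged illustration, not the dissertation's AUV controller: the dynamics are a toy double integrator and the cost weights and switching rule are assumed, but it shows how two quadratic cost settings yield two gains and how measures of performance and workload can select between them.

    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.1], [0.0, 1.0]])  # toy discrete double-integrator dynamics
    B = np.array([[0.0], [0.1]])

    def lqr_gain(Q, R):
        """Discrete-time LQR gain from the algebraic Riccati equation."""
        P = solve_discrete_are(A, B, Q, R)
        return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

    K_auto  = lqr_gain(np.diag([10.0, 1.0]), np.array([[0.1]]))  # fully autonomous mode
    K_human = lqr_gain(np.diag([1.0, 1.0]),  np.array([[1.0]]))  # human-in-the-loop mode

    def choose_mode(x, workload, error_limit=0.5, workload_limit=0.7):
        """Switch to the human-in-the-loop mode only when tracking error is large
        and the operator still has spare cognitive capacity."""
        if np.linalg.norm(x) > error_limit and workload < workload_limit:
            return "human", -K_human @ x
        return "auto", -K_auto @ x

    print(choose_mode(np.array([1.0, 0.0]), workload=0.4))  # error high, workload low -> human
    print(choose_mode(np.array([1.0, 0.0]), workload=0.9))  # operator overloaded -> autonomous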

    Towards adaptive multi-robot systems: self-organization and self-adaptation

    The development of complex system ensembles that operate in uncertain environments is a major challenge, because system designers cannot fully specify the system during specification and development, before it is deployed. Natural swarm systems face similar conditions, yet, being self-adaptive and able to self-organize, these systems show beneficial emergent behaviour. Similar concepts can be extremely helpful for artificial systems, especially in multi-robot scenarios, which require such solutions in order to be applicable to highly uncertain real-world applications. In this article, we present a comprehensive overview of state-of-the-art solutions in emergent systems, self-organization, self-adaptation, and robotics. We discuss these approaches in the light of a framework for multi-robot systems and identify similarities, differences, missing links, and open gaps that have to be addressed in order to make this framework possible.