4 research outputs found

    Interactive Force Control Based on Multimodal Robot Skin for Physical Human-Robot Collaboration

    This work proposes and realizes a control architecture that can support the deployment of a large-scale robot skin in a Human-Robot Collaboration scenario. It is shown how whole-body tactile feedback can extend the capabilities of robots during dynamic interactions by providing information about multiple contacts across the robot's surface. Specifically, an uncalibrated skin system is used to implement stable force control while simultaneously handling the multi-contact interactions of a user. The system formulates control tasks for force control, tactile guidance, collision avoidance, and compliance, and fuses them with a multi-priority redundancy resolution strategy. The approach is evaluated on an omnidirectional mobile manipulator with dual arms covered with robot skin. Results are assessed under dynamic conditions, showing that multi-modal tactile information enables robust force control while at the same time remaining responsive to a user's interactions.
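    A common way to fuse several control tasks with strict priorities, as the abstract describes, is null-space projection: lower-priority task velocities are projected into the null space of the higher-priority task's Jacobian so they cannot disturb it. The sketch below is an illustrative two-level example with a made-up Jacobian, not the paper's implementation.

    ```python
    import numpy as np

    def prioritized_joint_velocity(J1, dx1, dq2):
        """Resolve two tasks with strict priority.

        J1  : Jacobian of the primary task (m x n), e.g. force control
        dx1 : desired primary task-space velocity (m,)
        dq2 : desired secondary joint velocity (n,), e.g. compliance
        """
        J1_pinv = np.linalg.pinv(J1)
        # Null-space projector of task 1: motions in its range leave
        # the primary task-space velocity untouched.
        N1 = np.eye(J1.shape[1]) - J1_pinv @ J1
        return J1_pinv @ dx1 + N1 @ dq2

    # Example: a 2-dimensional primary task in a 4-DOF joint space.
    J1 = np.array([[1.0, 0.0, 0.5, 0.0],
                   [0.0, 1.0, 0.0, 0.5]])
    dq = prioritized_joint_velocity(J1, np.array([0.1, -0.2]), 0.05 * np.ones(4))
    # The primary task velocity is reproduced exactly despite the
    # secondary motion:
    print(np.allclose(J1 @ dq, [0.1, -0.2]))  # True
    ```

    The same pattern extends recursively to more priority levels (guidance, collision avoidance, compliance) by projecting each task through the accumulated null space of all higher-priority tasks.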

    Human-robot collaborative task planning using anticipatory brain responses

    Human-robot interaction (HRI) describes scenarios in which human and robot work as partners, sharing the same environment or complementing each other on a joint task. HRI is characterized by the need for high adaptability and flexibility of robotic systems toward their human interaction partners. One of the major challenges in HRI is task planning with dynamic subtask assignment, which is particularly difficult when the human's subtask choices are not readily accessible to the robot. In the present work, we explore the feasibility of using electroencephalogram (EEG)-based neuro-cognitive measures for online robot learning of dynamic subtask assignment. To this end, we demonstrate in an experimental human subject study, featuring a joint HRI task with a UR10 robotic manipulator, the presence of EEG measures indicative of a human partner anticipating a takeover situation from human to robot or vice versa. The present work further proposes a reinforcement-learning-based algorithm employing these measures as a neuronal feedback signal from the human to the robot for dynamic learning of subtask assignment. The efficacy of this algorithm is validated in a simulation-based study. The simulation results reveal that even with relatively low decoding accuracies, successful robot learning of subtask assignment is feasible, with around 80% choice accuracy among four subtasks within 17 minutes of collaboration. The simulation results further reveal that scaling to more subtasks is feasible and mainly entails longer robot learning times. These findings demonstrate the usability of EEG-based neuro-cognitive measures to mediate the complex and largely unsolved problem of human-robot collaborative task planning.
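    The core idea, learning a subtask assignment from a noisy binary feedback signal, can be illustrated with a simple epsilon-greedy bandit. This is a hedged sketch under assumptions (the subtask count, decoding accuracy, and update rule are stand-ins), not the paper's algorithm: the "EEG decoder" is simulated by flipping the true reward with probability 1 - decoding_accuracy.

    ```python
    import random

    def learn_assignment(true_best=2, decoding_accuracy=0.7,
                         n_trials=2000, epsilon=0.1, seed=0):
        """Learn which of four subtasks the human prefers from noisy feedback."""
        rng = random.Random(seed)
        q = [0.0] * 4   # value estimate per subtask
        n = [0] * 4     # selection counts
        for _ in range(n_trials):
            # Epsilon-greedy: mostly exploit the current best estimate.
            if rng.random() < epsilon:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q[i])
            reward = 1.0 if a == true_best else 0.0
            # Imperfect decoding flips the feedback signal sometimes.
            if rng.random() > decoding_accuracy:
                reward = 1.0 - reward
            n[a] += 1
            q[a] += (reward - q[a]) / n[a]   # incremental mean update
        return max(range(4), key=lambda i: q[i])

    print(learn_assignment())  # recovers the true best subtask despite 70% decoding accuracy
    ```

    Even at 70% decoding accuracy the expected feedback for the correct subtask (0.7) clearly exceeds that of the others (0.3), so the averages separate with enough trials, which matches the abstract's observation that low decoding accuracies still permit successful learning, at the cost of longer learning times.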

    STOICISM BIBLIOGRAPHY
