5 research outputs found

    Characterization of Indicators for Adaptive Human-Swarm Teaming

    Swarm systems consist of large numbers of agents that collaborate autonomously. With an appropriate level of human control, swarm systems could be applied in a variety of contexts, ranging from urban search and rescue to cyber defence. However, successful deployment of a swarm in such applications depends on effective coupling between human and swarm. While adaptive autonomy promises enhanced performance in human-machine interaction, distinct factors must be considered when implementing it for human-swarm interaction. This paper reviews the multidisciplinary literature on the aspects that contribute to facilitating adaptive autonomy in human-swarm interaction. Specifically, five aspects necessary for an adaptive agent to operate properly are considered and discussed: mission objectives, interaction, mission complexity, automation levels, and human states. We distill the corresponding indicators in each of the five aspects and propose a framework, named MICAH (Mission-Interaction-Complexity-Automation-Human), which maps the primitive state indicators needed for adaptive human-swarm teaming.
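    As an illustration of the kind of mapping MICAH describes, the five aspects could be grouped into a simple state container. This is a hypothetical sketch, not the paper's implementation; the field and indicator names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class MICAHState:
    """Hypothetical state container grouping indicators by MICAH aspect."""
    mission: dict = field(default_factory=dict)      # e.g. objective progress
    interaction: dict = field(default_factory=dict)  # e.g. operator command rate
    complexity: dict = field(default_factory=dict)   # e.g. obstacle density
    automation: dict = field(default_factory=dict)   # e.g. current autonomy level
    human: dict = field(default_factory=dict)        # e.g. workload, fatigue

# Example: an adaptive agent would read such a state to decide
# whether to raise or lower the swarm's autonomy level.
state = MICAHState()
state.human["workload"] = 0.7  # indicator values normalised to [0, 1] here
```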

    Human-robot collaborative task planning using anticipatory brain responses

    Human-robot interaction (HRI) describes scenarios in which human and robot work as partners, sharing the same environment or complementing each other on a joint task. HRI is characterized by the need for high adaptability and flexibility of robotic systems toward their human interaction partners. One of the major challenges in HRI is task planning with dynamic subtask assignment, which is particularly difficult when the human's subtask choices are not readily accessible to the robot. In the present work, we explore the feasibility of using electroencephalogram (EEG)-based neuro-cognitive measures for online robot learning of dynamic subtask assignment. To this end, we demonstrate in an experimental human-subject study, featuring a joint HRI task with a UR10 robotic manipulator, the presence of EEG measures indicative of a human partner anticipating a takeover situation from human to robot or vice versa. We further propose a reinforcement learning-based algorithm that employs these measures as a neuronal feedback signal from the human to the robot for dynamic learning of subtask assignment. The efficacy of this algorithm is validated in a simulation-based study. The simulation results reveal that even with relatively low decoding accuracies, successful robot learning of subtask assignment is feasible, with around 80% choice accuracy among four subtasks within 17 minutes of collaboration. The results further reveal that scaling to more subtasks is feasible, at the cost of longer robot learning times. These findings demonstrate the usability of EEG-based neuro-cognitive measures to mediate the complex and largely unsolved problem of human-robot collaborative task planning.
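    The core idea, learning a subtask assignment from a noisy binary feedback channel, can be sketched as a simple bandit-style learner. This is a minimal illustration under assumed parameters (learning rate, exploration rate, four subtasks), not the paper's algorithm; the EEG decoder is modelled only as a feedback signal that is flipped with probability 1 − decoding accuracy:

```python
import random

def simulate_learning(n_subtasks=4, decoding_accuracy=0.7,
                      n_trials=500, seed=0):
    """Learn which subtask the human expects the robot to take over,
    from noisy +/-1 feedback (a stand-in for a decoded EEG response).
    Returns True if the learner's final choice matches the preference."""
    rng = random.Random(seed)
    true_pref = rng.randrange(n_subtasks)  # hypothetical human preference
    q = [0.0] * n_subtasks                 # value estimate per subtask
    alpha, eps = 0.1, 0.1                  # assumed learning/exploration rates

    for _ in range(n_trials):
        # epsilon-greedy subtask choice
        if rng.random() < eps:
            a = rng.randrange(n_subtasks)
        else:
            a = max(range(n_subtasks), key=lambda i: q[i])
        # ideal feedback: +1 if the choice matched the human's expectation
        feedback = 1.0 if a == true_pref else -1.0
        # imperfect decoder: flip the sign with prob (1 - decoding_accuracy)
        if rng.random() > decoding_accuracy:
            feedback = -feedback
        q[a] += alpha * (feedback - q[a])  # incremental value update

    return max(range(n_subtasks), key=lambda i: q[i]) == true_pref
```

    Even at 70% decoding accuracy the expected feedback still separates the correct subtask (+0.4 on average) from the others (−0.4), which is why learning remains feasible with a low-accuracy decoder.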

    Comparison of Machine Learning Techniques on Trust Detection Using EEG

    Trust is a pillar of society and a fundamental aspect of every relationship. With the use of automated agents in today's workforce growing exponentially, the ability to actively monitor the trust level of an individual working with the automation is becoming increasingly important. Humans often have miscalibrated trust in automation and are therefore prone to making costly mistakes. Since the decision to trust or distrust has been shown to correlate with specific brain activity, it is thought that there are EEG signals associated with this decision. Using both a human-human trust and a human-machine trust EEG dataset from past research, within-participant, cross-participant, and cross-task cross-participant trust detection was attempted. Six machine learning models, logistic regression, LDA, QDA, SVM, RFC, and an ANN, were used for each experiment. Multiple within-participant models achieved balanced accuracies greater than 70.00%, but no cross-participant or cross-task cross-participant models did.
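    The balanced accuracy used above is the mean of per-class recalls, which prevents a classifier from scoring well by always predicting the majority class (e.g. "trust"). A minimal reference implementation, independent of any of the six models:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]      # samples of class c
        correct = sum(1 for i in idx if y_pred[i] == c)        # correctly recalled
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# A degenerate classifier that always predicts "trust" (1) scores only 0.5,
# even though its plain accuracy on this imbalanced sample would be 0.75.
print(balanced_accuracy([1, 1, 1, 0], [1, 1, 1, 1]))  # → 0.5
```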