A computational model of human trust in supervisory control of robotic swarms
Trust is an important factor in human-automation interaction, mediating human operators' reliance on automation. In this work, we study human factors in supervisory control of robotic swarms and develop a computational model of human trust in swarm systems with varied levels of autonomy (LOA). We extend classic trust theory by adding an intermediate feedback loop to the trust model, which formulates human trust evolution as a combination of open-loop trust anticipation and closed-loop trust feedback. A Kalman filter is implemented to realize this structure. We conducted a human-subject experiment to collect data on supervisory control of robotic swarms. Participants were asked to direct the swarm in a simulated environment to complete a foraging task using control systems with varied LOA. We implemented three LOAs: manual, mixed-initiative (MI), and fully autonomous. In the manual and autonomous LOAs, the swarm is controlled exclusively by a human or by a search algorithm, respectively, while in the MI LOA the human operator and the algorithm control the swarm collaboratively. We trained a personalized model for each participant and evaluated model performance on a separate data set. Evaluation results show that our Kalman model outperforms existing models, including inverse reinforcement learning and dynamic Bayesian network methods.
In summary, the proposed work is novel in the following aspects:
1) This Kalman estimator is the first to model the complete trust evolution process with both closed-loop feedback and open-loop trust anticipation.
2) The proposed model analyzes time-series data to reveal the influence of events that occur during the course of an interaction, namely a user's intervention and report of levels of trust.
3) The proposed model considers the operator's cognitive time lag between perceiving and processing the system display.
4) The proposed model uses the Kalman filter structure to fuse information from different sources to estimate a human operator's mental states.
5) The proposed model provides a personalized model for each individual.
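The filter structure described above, open-loop trust anticipation in the predict step and closed-loop trust feedback in the update step, can be pictured with a minimal scalar sketch. All variable names, gains, and noise variances below are illustrative assumptions, not values from the paper.

```python
def kalman_trust_step(t_est, p_est, perf, report=None,
                      a=0.9, b=0.1, q=0.05, r=0.2):
    """One predict/update cycle of a scalar Kalman trust estimator.

    t_est, p_est : current trust estimate and its variance
    perf         : observed swarm performance (open-loop input)
    report       : operator's self-reported trust, when available
    Gains (a, b) and noise variances (q, r) are illustrative only.
    """
    # Predict: open-loop trust anticipation driven by swarm performance
    t_pred = a * t_est + b * perf
    p_pred = a * a * p_est + q

    if report is None:  # no trust report this step: prediction only
        return t_pred, p_pred

    # Update: closed-loop correction toward the operator's trust report
    k = p_pred / (p_pred + r)           # Kalman gain
    t_new = t_pred + k * (report - t_pred)
    p_new = (1.0 - k) * p_pred
    return t_new, p_new
```

In this framing, a personalized model per individual would arise from fitting the gains and noise variances to each participant's logged interventions and trust reports.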
A Survey of Multi-Agent Human-Robot Interaction Systems
This article presents a survey of literature in the area of Human-Robot
Interaction (HRI), specifically on systems containing more than two agents
(i.e., having multiple humans and/or multiple robots). We identify three core
aspects of "Multi-agent" HRI systems that are useful for understanding how
these systems differ from dyadic systems and from one another. These are the
Team structure, Interaction style among agents, and the system's Computational
characteristics. Under these core aspects, we present five attributes of HRI
systems, namely Team size, Team composition, Interaction model, Communication
modalities, and Robot control. These attributes are used to characterize and
distinguish one system from another. We populate resulting categories with
examples from recent literature along with a brief discussion of their
applications and analyze how these attributes differ from the case of dyadic
human-robot systems. We summarize key observations from the current literature,
and identify challenges and promising areas for future research in this domain.
In order to realize the vision of robots being part of the society and
interacting seamlessly with humans, there is a need to expand research on
multi-human, multi-robot systems. Not only do these systems require
coordination among several agents, they also involve multi-agent and indirect
interactions which are absent from dyadic HRI systems. Adding multiple agents
in HRI systems requires advanced interaction schemes, behavior understanding
and control methods to allow natural interactions among humans and robots. In
addition, research on human behavioral understanding in mixed human-robot teams
also requires more attention. This will help formulate and implement effective
robot control policies in HRI systems with large numbers of heterogeneous
robots and humans, a team composition reflecting many real-world scenarios.
Comment: 23 pages, 7 figures
A Planning Pipeline for Large Multi-Agent Missions
In complex multi-agent applications, human operators are often tasked with planning and managing large heterogeneous teams of humans and autonomous vehicles. Although the use of these autonomous vehicles broadens the scope of meaningful applications, many of their systems remain unintuitive and difficult to master for human operators whose expertise lies in the application domain rather than at the platform level. Current research focuses on developing the individual capabilities necessary to plan multi-agent missions of this scope, placing little emphasis on integrating these components into a full pipeline. This paper presents a complete and user-agnostic planning pipeline for large multi-agent missions, known as the HOLII GRAILLE. The system takes a holistic approach to mission planning by integrating capabilities in human-machine interaction, flight path generation, and validation and verification. Component modules of the pipeline are explored individually, as well as their integration into a whole system. Lastly, implications for future mission planning are discussed.
Industry Led Use-Case Development for Human-Swarm Operations
In the domain of unmanned vehicles, autonomous robotic swarms promise to
deliver increased efficiency and collective autonomy. How these swarms will
operate in the future, and what communication requirements and operational
boundaries will arise are yet to be sufficiently defined. A workshop was
conducted with 11 professional unmanned-vehicle operators and designers with
the objective of identifying use-cases for developing and testing robotic
swarms. Three scenarios were defined by experts and were then compiled to
produce a single use case outlining the scenario, objectives, agents,
communication requirements and stages of operation when collaborating with
highly autonomous swarms. Our compiled use case is intended for researchers,
designers, and manufacturers alike to test and tailor their design pipeline to
accommodate some of the key issues in human-swarm interaction. Examples
of application include informing simulation development, forming the basis of
further design workshops, and identifying trust issues that may arise between
human operators and the swarm.
Comment: Accepted at AAAI 2022 Spring Symposium Series (Putting AI in the
Critical Loop: Assured Trust and Autonomy in Human-Machine Teams)
Analysis and Synthesis of Effective Human-Robot Interaction at Varying Levels in Control Hierarchy
Robot controller design is usually hierarchical, combining high-level task and motion planning with low-level control law design. In the presented works, we investigate low-level and high-level control designs that guarantee joint performance of human-robot interaction (HRI).

In the first work, a low-level method using the switched linear quadratic regulator (SLQR), an optimal control policy based on a quadratic cost function, is used. By incorporating measures of robot performance and human workload, it can determine when to hand control to the human operator in a way that improves overall task performance while reducing operator workload. This method is demonstrated in simulation using the complex dynamics of an autonomous underwater vehicle (AUV), showing that it maintains task performance while keeping operator workload reduced. An extension of this work to path planning is also presented for obstacle avoidance, with simulations showing human planning successfully guiding the AUV around obstacles to reach its goals.

In the high-level approach, formal methods are applied to a scenario where an operator oversees a group of mobile robots as they navigate an unknown environment. Autonomy in this scenario uses specifications written in linear temporal logic (LTL) to conduct symbolic motion planning in a guaranteed safe, though very conservative, manner. A human operator, using gathered environmental data, is able to produce a more efficient path. To aid in task decomposition and real-time switching, a dynamic human trust model is used. Simulations show the successful implementation of this method.
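To make the SLQR idea concrete, the sketch below computes a steady-state LQR gain for a scalar system by iterating the discrete Riccati recursion, and pairs it with a toy rule that switches between an autonomous gain and a human-in-the-loop gain. The switching thresholds and the rule itself are our own illustration under stated assumptions, not the policy from the paper.

```python
def dare_gain(a, b, q, r, iters=200):
    """Steady-state LQR gain for the scalar system x[k+1] = a*x[k] + b*u[k]
    with stage cost q*x**2 + r*u**2, found by iterating the discrete
    Riccati recursion until it settles (illustrative, not a production solver)."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)   # gain at this iterate
        p = q + a * p * (a - b * k)         # Riccati update
    return k

def switched_control(x, perf, workload, k_auto, k_human,
                     perf_min=0.4, w_max=0.7):
    """Toy switching rule: invoke the human-in-the-loop mode only when
    task performance is low and the operator has spare workload capacity.
    Thresholds (perf_min, w_max) are hypothetical placeholders."""
    use_human = perf < perf_min and workload < w_max
    k = k_human if use_human else k_auto
    return -k * x, use_human
```

A richer implementation would solve the Riccati equation per mode for the full vehicle dynamics; the scalar version only shows where workload and performance measures enter the switching decision.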
Trust Repair in Human-Swarm Teams
Swarm robots are coordinated via simple control laws to generate emergent behaviors such as flocking, rendezvous, and deployment. Human-swarm teaming has been widely proposed for scenarios such as human-supervised teams of unmanned aerial vehicles (UAVs) for disaster rescue, UAV and ground vehicle cooperation for building security, and soldier-UAV teaming in combat. Effective cooperation requires an appropriate level of trust between a human and a swarm. When a UAV swarm is deployed in a real-world environment, its performance is subject to real-world factors, such as system reliability and wind disturbances. Degraded performance of a robot can cause undesired swarm behaviors, decreasing human trust. This loss of trust, in turn, can trigger human intervention in the UAVs' task executions, decreasing cooperation effectiveness if the intervention is inappropriate. Therefore, to promote effective cooperation, we propose and test a trust-repairing method (Trust-repair) that restores swarm performance and human trust to an appropriate level by correcting undesired swarm behaviors. Faulty swarms caused by both external and internal factors were simulated to evaluate how well the Trust-repair algorithm repairs swarm performance and restores human trust. Results show that Trust-repair is effective in restoring trust to a level intermediate between normal and faulty conditions.
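One simple way to picture behavior correction of this kind is robust outlier detection against the swarm consensus. The sketch below flags agents whose velocity deviates strongly from the swarm median and resets them to the consensus value; it is our own illustration of the idea, not the Trust-repair algorithm from the paper.

```python
import numpy as np

def repair_swarm(vels, tol=3.0):
    """Flag agents whose velocity is a robust outlier (deviation from the
    swarm median exceeding tol times the median absolute deviation) and
    overwrite them with the consensus (median) velocity."""
    vels = np.asarray(vels, dtype=float)
    med = np.median(vels, axis=0)                       # consensus velocity
    mad = np.median(np.abs(vels - med), axis=0) + 1e-9  # robust spread
    faulty = np.any(np.abs(vels - med) > tol * mad, axis=1)
    vels[faulty] = med                                  # corrective reset
    return vels, faulty
```

The median and MAD are used instead of mean and standard deviation so that a badly faulty agent cannot mask its own detection by inflating the spread estimate.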
Double elevation: Autonomous weapons and the search for an irreducible law of war
What should be the role of law in response to the spread of artificial intelligence in war? Fuelled by both public and private investment, military technology is accelerating towards increasingly autonomous weapons, as well as the merging of humans and machines. Contrary to much of the contemporary debate, this is not a paradigm change; it is the intensification of a central feature in the relationship between technology and war: double elevation, above one's enemy and above oneself. Elevation above one's enemy aspires to spatial, moral, and civilizational distance. Elevation above oneself reflects a belief in rational improvement that sees humanity as the cause of inhumanity and de-humanization as our best chance for humanization. The distance of double elevation is served by the mechanization of judgement. To the extent that judgement is seen as reducible to algorithm, law becomes the handmaiden of mechanization. In response, neither a focus on questions of compatibility nor a call for a 'ban on killer robots' helps articulate a meaningful role for law. Instead, I argue that we should turn to a long-standing philosophical critique of artificial intelligence, which highlights not the threat of omniscience, but that of impoverished intelligence. Therefore, if there is to be a meaningful role for law in resisting double elevation, it should be law encompassing subjectivity, emotion and imagination, law irreducible to algorithm, a law of war that appreciates situated judgement in the wielding of violence for the collective.
Brain Computer Interfaces for the Control of Robotic Swarms
A robotic swarm can be defined as a large group of inexpensive, interchangeable
robots with limited sensing and/or actuating capabilities that cooperate (explicitly
or implicitly) based on local communications and sensing in order to complete a
mission. Its inherent redundancy provides flexibility and robustness to failures and
environmental disturbances which guarantee the proper completion of the required
task. At the same time, human intuition and cognition can prove very useful in
extreme situations where a fast and reliable solution is needed. This idea led to the
creation of the field of Human-Swarm Interfaces (HSI) which attempts to incorporate
the human element into the control of robotic swarms for increased robustness and
reliability. The aim of the present work is to extend the current state-of-the-art in HSI
by applying ideas and principles from the field of Brain-Computer Interfaces (BCI),
which has proven to be very useful for people with motor disabilities. At first, a
preliminary investigation about the connection of brain activity and the observation
of swarm collective behaviors is conducted. After showing that such a connection
may exist, a hybrid BCI system is presented for the control of a swarm of quadrotors.
The system is based on the combination of motor imagery and the input from a game
controller, while its feasibility is proven through an extensive experimental process.
Finally, speech imagery is proposed as an alternative mental task for BCI applications.
This is done through a series of rigorous experiments and appropriate data analysis.
This work suggests that the integration of BCI principles in HSI applications can be
successful and it can potentially lead to systems that are more intuitive for the users
than the current state-of-the-art. At the same time, it motivates further research in
the area and sets the stepping stones for the potential development of the field of
Brain-Swarm Interfaces (BSI).
Masters Thesis, Mechanical Engineering, 201
Foundations of Trusted Autonomy
Trusted Autonomy; Automation Technology; Autonomous Systems; Self-Governance; Trusted Autonomous Systems; Design of Algorithms and Methodologies