
    A computational model of human trust in supervisory control of robotic swarms

    Trust is an important factor in human-automation interaction, mediating the reliance of human operators. In this work, we study human factors in supervisory control of robotic swarms and develop a computational model of human trust in swarm systems with varied levels of autonomy (LOA). We extend classic trust theory by adding an intermediate feedback loop, which formulates trust evolution as a combination of open-loop trust anticipation and closed-loop trust feedback. A Kalman filter is implemented to realize this structure. We conducted a human-subject experiment to collect data on supervisory control of robotic swarms. Participants were asked to direct the swarm in a simulated environment to complete a foraging task using control systems with three LOAs: manual, mixed-initiative (MI), and fully autonomous. In the manual and autonomous LOAs, the swarm is controlled exclusively by a human or by a search algorithm, respectively, while in the MI LOA the human operator and the algorithm control the swarm collaboratively. We train a personalized model for each participant and evaluate model performance on a separate data set. Evaluation results show that our Kalman model outperforms existing approaches, including inverse reinforcement learning and dynamic Bayesian network methods. In summary, the proposed work is novel in the following aspects: 1) the Kalman estimator is the first to model the complete trust evolution process with both closed-loop feedback and open-loop trust anticipation; 2) the model analyzes time-series data to reveal the influence of events that occur during an interaction, namely a user's interventions and reported levels of trust; 3) the model accounts for the operator's cognitive time lag between perceiving and processing the system display; 4) the model uses the Kalman filter structure to fuse information from different sources to estimate a human operator's mental state; and 5) the model is personalized to each individual.
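    The two-loop structure described above can be sketched as a one-dimensional Kalman filter: the predict step plays the role of open-loop trust anticipation, and the update step plays the role of closed-loop correction from an operator's self-reported trust. All dynamics, gains, and noise values below are illustrative assumptions, not the paper's actual model.

```python
def predict(trust, var, a=0.9, b=0.1, perf=1.0, q=0.05):
    """Open-loop anticipation (assumed dynamics): trust drifts toward a
    performance cue, and estimate uncertainty grows without feedback."""
    trust = a * trust + b * perf
    var = a * a * var + q
    return trust, var

def update(trust, var, report, r=0.2):
    """Closed-loop feedback: correct the estimate with a self-reported
    trust value, weighting by the Kalman gain."""
    k = var / (var + r)                 # Kalman gain
    trust = trust + k * (report - trust)
    var = (1 - k) * var
    return trust, var

# Alternate anticipation steps with occasional operator reports
trust, var = 0.5, 1.0
for report in [None, 0.7, None, 0.6]:
    trust, var = predict(trust, var)
    if report is not None:
        trust, var = update(trust, var, report)
```

    Between reports the uncertainty grows, so a fresh report pulls the estimate strongly; frequent reports shrink the gain, matching the intuition that feedback dominates anticipation when it is available.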

    Trust Repair in Human-Swarm Teams

    Swarm robots are coordinated via simple control laws to generate emergent behaviors such as flocking, rendezvous, and deployment. Human-swarm teaming has been widely proposed for scenarios such as human-supervised teams of unmanned aerial vehicles (UAVs) for disaster rescue, UAV and ground-vehicle cooperation for building security, and soldier-UAV teaming in combat. Effective cooperation requires an appropriate level of trust between a human and a swarm. When a UAV swarm is deployed in a real-world environment, its performance is subject to real-world factors such as system reliability and wind disturbances. Degraded performance of a robot can cause undesired swarm behaviors, decreasing human trust. This loss of trust, in turn, can trigger human intervention in the UAVs' task execution, decreasing cooperation effectiveness if the intervention is inappropriate. Therefore, to promote effective cooperation, we propose and test a trust-repairing method (Trust-repair) that restores performance and human trust in the swarm to an appropriate level by correcting undesired swarm behaviors. Faulty swarms caused by both external and internal factors were simulated to evaluate the performance of the Trust-repair algorithm in repairing swarm performance and restoring human trust. Results show that Trust-repair is effective in restoring trust to a level intermediate between the normal and faulty conditions.
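    The fault-then-repair trajectory of trust described above can be illustrated with a minimal first-order model in which trust tracks observed swarm performance. The update rule, gain, and performance levels are assumptions for illustration, not the paper's measured dynamics.

```python
def step_trust(trust, perf, gain=0.3):
    """Illustrative first-order model: trust moves a fraction of the way
    toward the currently observed swarm performance."""
    return trust + gain * (perf - trust)

def simulate(perf_trace, trust=0.8):
    """Run the trust model over a performance trace and record its history."""
    history = []
    for perf in perf_trace:
        trust = step_trust(trust, perf)
        history.append(trust)
    return history

# Normal operation, then a fault, then a repair that corrects the
# undesired behavior but does not restore full capacity
normal = [1.0] * 5
faulty = [0.3] * 5
repaired = [0.8] * 5
hist = simulate(normal + faulty + repaired)
```

    Under these assumed values, trust falls during the faulty phase and recovers after repair to a level between the faulty and normal conditions, mirroring the abstract's finding.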

    A Survey of Multi-Agent Human-Robot Interaction Systems

    This article presents a survey of literature in the area of Human-Robot Interaction (HRI), specifically on systems containing more than two agents (i.e., having multiple humans and/or multiple robots). We identify three core aspects of "multi-agent" HRI systems that are useful for understanding how these systems differ from dyadic systems and from one another: the team structure, the interaction style among agents, and the system's computational characteristics. Under these core aspects, we present five attributes of HRI systems, namely team size, team composition, interaction model, communication modalities, and robot control. These attributes are used to characterize and distinguish one system from another. We populate the resulting categories with examples from recent literature, briefly discuss their applications, and analyze how these attributes differ from the dyadic human-robot case. We summarize key observations from the current literature and identify challenges and promising areas for future research in this domain. To realize the vision of robots being part of society and interacting seamlessly with humans, research on multi-human, multi-robot systems must expand. Not only do these systems require coordination among several agents, they also involve multi-agent and indirect interactions that are absent from dyadic HRI systems. Adding multiple agents to HRI systems requires advanced interaction schemes, behavior understanding, and control methods to allow natural interactions among humans and robots. In addition, research on human behavioral understanding in mixed human-robot teams requires more attention. This will help formulate and implement effective robot control policies in HRI systems with large numbers of heterogeneous robots and humans, a team composition reflecting many real-world scenarios.
    Comment: 23 pages, 7 figures
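    The survey's five attributes form a natural record type for cataloguing systems. The sketch below encodes them as a dataclass; the field names follow the abstract, while the example entry and its values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HRISystem:
    """One entry in the survey's characterization scheme.
    Field names follow the five attributes in the abstract;
    the value vocabulary here is illustrative, not the survey's."""
    name: str
    team_size: int                     # total number of agents
    team_composition: str              # e.g. "two humans, four robots"
    interaction_model: str             # how agents influence one another
    communication_modalities: list     # e.g. ["GUI", "speech", "gesture"]
    robot_control: str                 # e.g. "supervisory", "teleoperation"

# Hypothetical entry: a multi-human, multi-robot team (more than a dyad)
example = HRISystem(
    name="hypothetical search-and-rescue team",
    team_size=6,
    team_composition="two humans, four robots",
    interaction_model="supervisory",
    communication_modalities=["GUI", "speech"],
    robot_control="shared autonomy",
)
```

    Structuring survey entries this way makes it easy to filter or group systems by any attribute, e.g. all systems with more than two agents, or all that use speech.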

    Mutual shaping in swarm robotics: User studies in fire and rescue, storage organization, and bridge inspection

    Many real-world applications have been suggested in the swarm robotics literature. However, there is a general lack of understanding of what needs to be done for robot swarms to be useful and trusted by users in reality. This paper aims to investigate user perception of robot swarms in the workplace, and to inform design principles for the deployment of future swarms in real-world applications. Three qualitative studies with a total of 37 participants were conducted across three sectors: fire and rescue, storage organization, and bridge inspection. Each study examined the users' perceptions using focus groups and interviews. In this paper, we describe our findings regarding the current processes and tools used in these professions and their main challenges; attitudes toward robot swarms assisting them; and the requirements that would encourage them to use robot swarms. We found a generally positive reaction to robot swarms for information gathering and automation of simple processes, and a preference for a human in the loop when it comes to decision making. Recommendations to increase trust and acceptance relate to transparency, accountability, safety, reliability, ease of maintenance, and ease of use. Finally, we found that mutual shaping, a methodology that creates a bidirectional relationship between users and technology developers to incorporate societal choices in all stages of research and development, is a valid approach to increase knowledge and acceptance of swarm robotics. This paper contributes to the creation of such a culture of mutual shaping between researchers and users, toward increasing the chances of a successful deployment of robot swarms in the physical realm.

    Evaluating the Potential of Drone Swarms in Nonverbal HRI Communication

    Human-to-human communication is enriched with affects and emotions, conveyed and perceived through both verbal and nonverbal channels. It is our thesis that drone swarms can be used to communicate information enriched with affect via nonverbal channels: guiding, generally interacting with, or warning a human audience via their patterns of motion or behavior. Furthermore, this approach has unique advantages, such as flexibility and mobility, over other forms of user interface. In this paper, we present a user study to understand how human participants perceived and interpreted the swarm behaviors of micro-drone Crazyflie quadcopters flying three different flight formations, bridging the psychological gap between the front-end technology (drones) and the human observers' emotional perceptions. We ask whether a human observer would consider a swarm of drones in their immediate vicinity nonthreatening enough to be a vehicle for communication, and whether a human would intuit some communication from the swarm behavior, despite the lack of verbal or written language. Our results show statistically significant support for the thesis that a human participant is open to interpreting the motion of drones as having intent, and to potentially interpreting that motion as communication. This supports the potential use of drone swarms as a communication resource in emergency guidance situations, policing of public events, tour guidance, and similar settings.

    A Secure Group Communication Architecture for a Swarm of Autonomous Unmanned Aerial Vehicles

    This thesis investigates the application of a secure group communication architecture to a swarm of autonomous unmanned aerial vehicles (UAVs). A multicast secure group communication architecture for the low earth orbit (LEO) satellite environment is evaluated to determine whether it can be effectively adapted to a swarm of UAVs and provide secure, scalable, and efficient communications. The performance of the proposed security architecture is evaluated against two other commonly used architectures using a discrete-event computer simulation developed in MATLAB. Performance is evaluated in terms of the scalability and efficiency of the group key distribution and management scheme when swarm size, swarm mobility, and multicast group join and departure rates are varied. The metrics include the total keys distributed over the simulation period, the average number of times an individual UAV must rekey, the average bandwidth used to rekey the swarm, and the average percentage of battery consumed by a UAV to rekey over the simulation period. The proposed security architecture can successfully be applied to a swarm of autonomous UAVs using current technology, and it is more efficient and scalable than the other tested, commonly used architectures. Over all the tested configurations, the proposed architecture distributes 55.2-94.8% fewer keys, rekeys 59.0-94.9% less often per UAV, uses 55.2-87.9% less bandwidth to rekey, and reduces battery consumption by 16.9-85.4%.
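    The abstract does not specify the key management scheme, but the kind of rekeying savings it reports is typical of tree-based schemes such as a logical key hierarchy (LKH) versus a flat shared group key. The sketch below compares per-leave rekey message counts under that standard LKH cost model; it is an illustrative assumption, not the thesis's actual architecture.

```python
import math

def flat_rekey_keys(n):
    """Flat scheme: when a member leaves, a new group key must be
    delivered individually to each of the n - 1 remaining UAVs."""
    return n - 1

def tree_rekey_keys(n, degree=2):
    """Logical key hierarchy (RFC 2627 style): only the keys on the
    leaver's path to the root change, costing roughly
    degree * log_degree(n) rekey messages."""
    return degree * math.ceil(math.log(n, degree))

# Relative saving for an assumed 64-UAV swarm
n = 64
saving = 1 - tree_rekey_keys(n) / flat_rekey_keys(n)
```

    For a 64-UAV swarm this model gives 12 rekey messages instead of 63, a saving in the same broad range as the reductions the thesis reports, which is why tree-based schemes scale well as swarm size and churn grow.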

    Smooth and Resilient Human–Machine Teamwork as an Industry 5.0 Design Challenge

    Smart machine companions such as artificial intelligence (AI) assistants and collaborative robots are rapidly populating the factory floor. Future factory-floor workers will work in teams that include both human co-workers and smart machine actors. The visions of Industry 5.0 describe sustainable, resilient, and human-centered future factories that will require smart and resilient capabilities from both next-generation manufacturing systems and human operators. What kinds of approaches can help design such resilient human-machine teams and the collaboration within them? In this paper, we analyze this design challenge and propose basing the design on the established joint cognitive systems approach, complemented with approaches that support human centricity in the early phases of design and in the development of continuously co-evolving human-machine teams. We propose approaches to observing and analyzing collaboration in human-machine teams, developing the concept of operations with relevant stakeholders, and including ethical aspects in design and development. Specifically, we propose three complementary approaches and methods: actor-network theory, the concept of operations, and ethically aware design. We identify their possibilities and challenges in designing and developing smooth human-machine teams for Industry 5.0 manufacturing systems.

    Closer Than You Think: The Implications of the Third Offset Strategy for the U.S. Army

    The Defense Innovation Initiative (DII), begun in November 2014 by former Secretary of Defense Chuck Hagel, is intended to ensure U.S. military superiority throughout the 21st century. The DII seeks broad-based innovation across the spectrum of concepts, research and development, capabilities, leader development, wargaming, and business practices. An essential component of the DII is the Third Offset Strategy, a plan for overcoming (offsetting) adversary parity or advantage, reduced military force structure, and declining technological superiority in an era of great power competition. This study explored the implications for the Army of Third Offset innovations and breakthrough capabilities for the operating environment of 2035-2050. It focused less on debating the merits or feasibility of individual technologies and more on understanding the implications: the second- and third-order effects on the Army that must be anticipated ahead of the breakthrough.