8 research outputs found

    Empowering human-AI teams via Intentional Behavioral Synchrony

    As Artificial Intelligence (AI) proliferates across sectors such as healthcare, transportation, energy, and military applications, collaboration within human-AI teams is becoming increasingly critical. Understanding the interrelationships between the system's elements - humans and AI - is vital to achieving the best outcomes within individual team members' capabilities. It is also crucial for designing better AI algorithms and for identifying favored scenarios for joint AI-human missions that capitalize on the unique capabilities of both elements. In this conceptual study, we introduce Intentional Behavioral Synchrony (IBS) as a synchronization mechanism between humans and AI that establishes a trusting relationship without compromising mission goals. IBS aims to create a sense of similarity between AI decisions and human expectations, drawing on psychological concepts that can be integrated into AI algorithms. We also discuss the potential of multimodal fusion to set up a feedback loop between the two partners. With this work, we aim to start a research trend centered on exploring innovative ways of deploying synchrony within teams that include non-human members. Our goal is to foster a better sense of collaboration and trust between humans and AI, resulting in more effective joint missions.
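    To make the IBS idea concrete, the following minimal Python sketch shows one way a similarity signal between AI decisions and human expectations could drive a synchrony adjustment. The abstract does not specify an algorithm; the function names (ibs_alignment, ibs_adjust), the cosine-similarity measure, and the parameters alpha and floor are all illustrative assumptions, not the authors' method.

        import numpy as np

        def ibs_alignment(ai_action: np.ndarray, human_expectation: np.ndarray) -> float:
            # Cosine similarity between the AI's proposed action and the
            # inferred human expectation, both encoded as feature vectors.
            denom = np.linalg.norm(ai_action) * np.linalg.norm(human_expectation)
            return float(ai_action @ human_expectation / denom) if denom else 0.0

        def ibs_adjust(ai_action, human_expectation, mission_score, alpha=0.3, floor=0.8):
            # Nudge the AI action toward the human expectation, but keep the
            # nudge only if the mission objective stays above a quality floor,
            # mirroring the paper's 'without compromising mission goals' idea.
            ai_action = np.asarray(ai_action, dtype=float)
            human_expectation = np.asarray(human_expectation, dtype=float)
            blended = (1 - alpha) * ai_action + alpha * human_expectation
            if mission_score(blended) >= floor * mission_score(ai_action):
                return blended
            return ai_action

    In a multimodal setting, human_expectation would be estimated from fused sensor channels (e.g., gaze, speech, physiology), closing the feedback loop the authors describe.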

    Human–agent team dynamics: a review and future research opportunities

    Humans teaming with intelligent autonomous agents is becoming indispensable in work environments. However, human–agent teams pose significant challenges, as team dynamics are complex, arising from both the task and social aspects of human–agent interactions. To improve our understanding of human–agent team dynamics, in this article we conduct a systematic literature review. Drawing on Mathieu et al.'s (2019) teamwork model developed for all-human teams, we map the landscape of research on human–agent team dynamics, including structural features, compositional features, mediating mechanisms, and the interplay of these features and mechanisms. We reveal that research on human–agent team dynamics is still nascent, with a particular focus on information sharing, trust development, agents' human-likeness behaviors, shared cognitions, situation awareness, and function allocation. Gaps remain in many areas of team dynamics, such as team processes, adaptability, shared leadership, and team diversity. We offer various interdisciplinary pathways to advance research on human–agent teams.

    Foundations of Trusted Autonomy

    Trusted Autonomy; Automation Technology; Autonomous Systems; Self-Governance; Trusted Autonomous Systems; Design of Algorithms and Methodologies

    Human Autonomy Teaming - The Teamwork of the Future

    This is an edited volume. As a result of technological advances, collaboration between humans and technology is becoming increasingly important. In this context, Human Autonomy Teaming (HAT), a new form of teamwork between human team members and technical units, so-called autonomous agents, holds great potential and offers many possibilities in research and application. The human cooperates with the technical team member and is supported by it in shared tasks; both actors complement each other with their individual strengths while striving toward a common goal. This book presents current topics within the framework of HAT in an accessible form for researchers and practitioners, so that they can jointly contribute to the successful implementation of autonomous agents as human team members in the sense of HAT. Chapter 1 introduces the topic, presents basic definitions and models for the entire work, and shows the potential of HAT. Chapter 2 deals with human and technical requirements for successful HAT, before Chapter 3 goes into more detail on the cooperation between humans and technology and the associated strengths and weaknesses. Chapter 4 provides insights into current fields of application of HAT. Finally, Chapter 5 discusses future developments of HAT.

    Context Awareness in Swarm Systems

    Recent swarms of Uncrewed Systems (UxS) require substantial human input to support their operation. The limited 'intelligence' on these platforms constrains their potential value and increases their overall cost. Artificial Intelligence (AI) solutions are needed to allow a single human to guide swarms of larger sizes. Shepherding is a bio-inspired swarm guidance approach in which one or a few sheepdogs guide a larger number of sheep. By designing AI agents that play the role of sheepdogs, humans can guide the swarm through these AI agents in the same manner that a farmer uses biological sheepdogs to muster sheep. A context-aware AI-sheepdog offers human operators a smarter command and control system: it overcomes the literature's limiting assumption of swarm homogeneity, enabling the management of heterogeneous swarms, and allows the AI agents to team more effectively with human operators. This thesis aims to demonstrate the use of an ontology-guided architecture to deliver enhanced contextual awareness for swarm control agents. The proposed architecture increases the contextual awareness of AI-sheepdogs to improve swarm guidance and control, enabling individual and collective UxS to characterise and respond to ambiguous swarm behavioural patterns. The architecture, associated methods, and algorithms advance the swarm literature by allowing improved contextual awareness to guide heterogeneous swarms. Metrics and methods are developed to identify the sources of influence in the swarm, recognise and discriminate the behavioural traits of heterogeneous influencing agents, and design AI algorithms to recognise activities and behaviours. These contributions will enable the next generation of UxS with higher levels of autonomy to generate more effective Human-Swarm Teams (HSTs).
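    As a rough illustration of the shepherding dynamic described above, here is a minimal single-sheepdog sketch in the spirit of Strömbom-style shepherding models: sheep drift toward the flock centre and flee a nearby dog, while the dog alternates between collecting stray sheep and driving the flock toward a goal. All constants and update rules are illustrative assumptions, not the thesis's architecture.

        import numpy as np

        rng = np.random.default_rng(0)
        N, R_FLEE = 20, 3.0                      # flock size; sheep react to a dog this close
        sheep = rng.uniform(0, 10, (N, 2))
        dog = np.array([0.0, 0.0])
        goal = np.array([25.0, 25.0])

        def step(sheep, dog):
            gcm = sheep.mean(axis=0)             # global centre of mass of the flock
            # Sheep: mild attraction to the flock centre, strong repulsion from a near dog.
            from_dog = sheep - dog
            dist = np.linalg.norm(from_dog, axis=1, keepdims=True)
            flee = np.where(dist < R_FLEE, from_dog / np.maximum(dist, 1e-9), 0.0)
            sheep = sheep + 0.05 * (gcm - sheep) + 0.5 * flee
            # Dog: collect the furthest stray if the flock is spread out,
            # otherwise push from behind the flock centre toward the goal.
            spread = np.linalg.norm(sheep - gcm, axis=1)
            if spread.max() > 4.0:
                stray = sheep[spread.argmax()]
                target = stray + 0.5 * (stray - gcm)    # position behind the stray
            else:
                away = (gcm - goal) / np.linalg.norm(gcm - goal)
                target = gcm + 3.0 * away               # position behind the flock
            dog = dog + 0.6 * (target - dog) / max(np.linalg.norm(target - dog), 1e-9)
            return sheep, dog

        for _ in range(500):
            sheep, dog = step(sheep, dog)

    A context-aware AI-sheepdog, as proposed in the thesis, would replace these fixed thresholds with recognition of the behavioural traits of a heterogeneous flock.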

    Human-Autonomy Teaming And Agent Transparency

    We developed the user interfaces for two Human-Robot Interaction (HRI) tasking environments: dismounted infantry interacting with a ground robot (Autonomous Squad Member) and a human interacting with an intelligent agent to manage a team of heterogeneous robotic vehicles (IMPACT). These user interfaces were developed based on the Situation awareness-based Agent Transparency (SAT) model. User testing showed that as agent transparency increased, so did overall human-agent team performance. Participants were also able to calibrate their trust in the agent more appropriately as agent transparency increased.
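    For readers unfamiliar with the SAT model, it defines three levels of information an agent should surface: (1) its current actions and plans, (2) its reasoning, and (3) its projected outcomes and uncertainty. The sketch below shows one plausible way an interface could gate its display on those levels; the SATReport class and its fields are illustrative assumptions, not the interfaces built in this work.

        from dataclasses import dataclass

        @dataclass
        class SATReport:
            # SAT Level 1: what the agent is doing and intends to do.
            current_action: str
            plan: list[str]
            # SAT Level 2: the reasoning behind the plan.
            rationale: str = ""
            # SAT Level 3: projected outcome and its uncertainty.
            predicted_outcome: str = ""
            confidence: float = 0.0

            def render(self, level: int) -> str:
                # Show only the fields permitted at the chosen transparency level.
                lines = [f"Action: {self.current_action}",
                         f"Plan: {' -> '.join(self.plan)}"]
                if level >= 2:
                    lines.append(f"Rationale: {self.rationale}")
                if level >= 3:
                    lines.append(f"Projection: {self.predicted_outcome} (p={self.confidence:.2f})")
                return "\n".join(lines)

    Raising the rendered level corresponds to the transparency manipulation that improved performance and trust calibration in the user testing described above.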

    Human-Autonomy Teaming And Agent Transparency

    We developed the user interfaces for two HRI tasking environments based on the Situation awareness-based Agent Transparency (SAT) model: dismounted infantry interacting with a ground robot (Autonomous Squad Member) and a human interacting with an intelligent agent to manage a team of heterogeneous robotic vehicles (IMPACT). User testing showed that as agent transparency increased, so did human operator performance and trust calibration effectiveness. The expanded SAT model, which includes Teamwork Transparency, is also briefly described.