4,452 research outputs found

    The Impact of Coordination Quality on Coordination Dynamics and Team Performance: When Humans Team with Autonomy

    The increasing role of highly automated and intelligent systems as team members has started a paradigm shift from human-human teaming to Human-Autonomy Teaming (HAT). However, moving from human-human teaming to HAT is challenging. Teamwork requires skills that are often missing in robots and synthetic agents. Adding a synthetic agent as a team member may lead teams to demonstrate different coordination patterns, resulting in differences in team cognition and, ultimately, team effectiveness. The theory of Interactive Team Cognition (ITC) emphasizes the importance of team interaction behaviors over the collection of individual knowledge. In this dissertation, Nonlinear Dynamical Methods (NDMs) were applied to capture characteristics of overall team coordination and communication behaviors. The findings supported the hypothesis that coordination stability is related to team performance in a nonlinear manner, with optimal performance associated with moderate stability coupled with flexibility. Thus, HATs need mechanisms that produce moderately stable yet flexible coordination behavior to achieve team-level goals under both routine and novel task conditions.
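    The abstract does not specify which nonlinear dynamical measures were applied, so the sketch below is only an illustration of the hypothesized inverted-U relation: it computes a crude coordination-stability proxy (Shannon entropy over speaker transitions in a communication log) and fits a quadratic to synthetic team data to locate the stability level with the best fitted performance. The metric, data, and function names are assumptions, not the dissertation's method.

```python
# Minimal sketch (not the dissertation's method): a crude coordination-stability
# proxy plus a quadratic fit illustrating the hypothesized inverted-U relation
# between coordination stability and team performance. All data are synthetic.
import numpy as np
from collections import Counter

def turn_taking_entropy(speaker_sequence):
    """Shannon entropy of speaker-transition pairs; lower entropy ~ more rigid
    (stable) coordination, higher entropy ~ more variable coordination."""
    transitions = list(zip(speaker_sequence[:-1], speaker_sequence[1:]))
    counts = np.array(list(Counter(transitions).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)

# Synthetic teams: a stability score in [0, 1] (1 = perfectly rigid) and a
# performance score that peaks at moderate stability (inverted U) plus noise.
stability = rng.uniform(0, 1, size=60)
performance = 1.0 - 4.0 * (stability - 0.5) ** 2 + rng.normal(0, 0.1, size=60)

# Quadratic fit: performance ~ b0 + b1*stability + b2*stability^2.
b2, b1, b0 = np.polyfit(stability, performance, deg=2)
optimum = -b1 / (2 * b2)  # stability level with the highest fitted performance
print(f"fitted optimum stability = {optimum:.2f} (expected near 0.5)")

# The entropy proxy applied to a toy three-member communication log.
print("toy entropy:", turn_taking_entropy(list("ABCABCABABCCBA")))
```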

    Smarter Tech ↔ Better Teams: A Dual Imperative


    When AI joins the Team: A Literature Review on Intragroup Processes and their Effect on Team Performance in Team-AI Collaboration

    Although systems based on artificial intelligence (AI) can collaborate with humans on various complex tasks, little is known about how AI systems can successfully collaborate with human teams (team-AI collaboration). Team performance research states that team composition and intragroup processes are important predictors of team performance. However, it is not clear how intragroup processes in team-AI collaboration differ from those in human teams, or whether such differences are reflected in team performance. To answer these questions, we synthesize evidence from 18 empirical articles. Results indicate that intragroup processes such as communication and coordination are less effective in team-AI collaboration. Moreover, whether team cognition and trust are higher in team-AI collaboration than in human teams remains unclear, since studies report conflicting results. Likewise, results on team performance differences between team-AI collaboration and human teams are inconsistent. With this article, we offer a foundation for future research on team-AI collaboration.

    USMC VERTICAL TAKEOFF AND LANDING AIRCRAFT: HUMAN–MACHINE TEAMING FOR CONTROLLING UNMANNED AERIAL SYSTEMS

    The United States Marine Corps (USMC) is investing in aviation technologies through its Vertical Takeoff and Landing (VTOL) aircraft program that will enhance mission superiority and warfare dominance against both conventional and asymmetric threats. One of the USMC program initiatives is to launch unmanned aerial systems (UAS) from future human-piloted VTOL aircraft for collaborative hybrid (manned and unmanned) missions. This hybrid VTOL-UAS capability will support USMC intelligence, surveillance, and reconnaissance (ISR), electronic warfare (EW), communications relay, and kinetic air-to-ground strike missions. This capstone project studied the complex human-machine interactions involved in the future hybrid VTOL-UAS capability through model-based systems engineering analysis, coactive design interdependence analysis, and modeling and simulation experimentation. The capstone focused on a strike coordination and reconnaissance (SCAR) mission involving a manned VTOL platform, a VTOL-launched UAS, and a ground control station (GCS). The project produced system requirements, a system architecture, a conceptual design, and insights into the human-machine teaming aspects of this future VTOL capability. Approved for public release. Distribution is unlimited.
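    As a companion to the coactive design interdependence analysis mentioned above, the following is a minimal, hypothetical sketch of how an interdependence analysis table for a SCAR-style mission might be represented in code, with a query that flags capacities only one performer can supply (candidate brittleness points). The task steps, capacities, and performer names are illustrative assumptions, not the capstone's actual analysis.

```python
# Illustrative sketch only: a minimal tabular representation of an
# interdependence analysis (IA), in the spirit of Co-active Design, used to
# flag capacities with no alternative performer (i.e., no resilient teaming
# path). Task steps, capacities, and performers are hypothetical examples.
from dataclasses import dataclass

@dataclass
class IARow:
    task_step: str
    required_capacity: str
    performers: list[str]  # who can supply the capacity, e.g. ["pilot", "UAS"]

ia_table = [
    IARow("locate target", "sensor coverage of the area", ["UAS", "pilot"]),
    IARow("locate target", "ISR feed interpretation", ["pilot"]),
    IARow("coordinate strike", "comms relay to GCS", ["UAS"]),
    IARow("coordinate strike", "weapons release authority", ["pilot"]),
]

def single_performer_capacities(rows):
    """Capacities only one team member can supply: candidate brittleness
    points where the design offers no resilient alternative path."""
    return [(r.task_step, r.required_capacity, r.performers[0])
            for r in rows if len(r.performers) == 1]

for step, capacity, who in single_performer_capacities(ia_table):
    print(f"{step}: '{capacity}' depends solely on {who}")
```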

    Agent Teaming Situation Awareness (ATSA): A Situation Awareness Framework for Human-AI Teaming

    The rapid advancements in artificial intelligence (AI) have led to a growing trend of human-AI teaming (HAT) in various fields. As machines continue to evolve from mere automation to a state of autonomy, they are increasingly exhibiting unexpected behaviors and human-like cognitive/intelligent capabilities, including situation awareness (SA). This shift has the potential to enhance the performance of mixed human-AI teams over all-human teams, underscoring the need for a better understanding of the dynamic SA interactions between humans and machines. To this end, we provide a review of leading SA theoretical models and a new framework for SA in the HAT context based on the key features and processes of HAT. The Agent Teaming Situation Awareness (ATSA) framework unifies human and AI behavior and involves bidirectional, dynamic interaction. The framework builds on individual and team SA models and elaborates on the cognitive mechanisms for modeling HAT. Similar perceptual cycles are adopted for the individual (both human and AI) and the whole team, tailored to the unique requirements of the HAT context. ATSA emphasizes cohesive and effective HAT through its structures and components, including teaming understanding, teaming control, and the world, as well as an adhesive transactive part. We further propose several future research directions to expand on the distinctive contributions of ATSA and address specific and pressing next steps. Comment: 52 pages, 5 figures, 1 table.
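    ATSA itself is a conceptual framework, but a toy data sketch can make the named pieces concrete: agents with individual SA (using the standard perception/comprehension/projection levels), a transactive exchange standing in for the "adhesive transactive part," and a crude team-SA proxy. Every class, field, and function name below is hypothetical and not drawn from the paper.

```python
# Illustrative sketch only: one way the entities named by the ATSA framework
# (agents with individual SA, a shared exchange, and team-level SA) might be
# organized as data. These names are hypothetical, not the paper's formalism.
from dataclasses import dataclass, field

@dataclass
class IndividualSA:
    # Endsley-style levels commonly used in individual SA models.
    perceived: set[str] = field(default_factory=set)     # level 1: elements noticed
    comprehended: set[str] = field(default_factory=set)  # level 2: meaning assigned
    projected: set[str] = field(default_factory=set)     # level 3: anticipated states

@dataclass
class Agent:
    name: str
    kind: str  # "human" or "AI"
    sa: IndividualSA = field(default_factory=IndividualSA)

def transactive_exchange(sender: Agent, receiver: Agent, elements: set[str]) -> None:
    """Bidirectional HAT communication: the sender pushes SA elements it holds,
    updating the receiver's perception (a crude stand-in for the adhesive
    transactive part that binds team SA)."""
    shared = elements & (sender.sa.perceived | sender.sa.comprehended)
    receiver.sa.perceived |= shared

def team_shared_sa(agents: list[Agent]) -> set[str]:
    """Elements every member currently perceives: one crude proxy for team SA."""
    return set.intersection(*(a.sa.perceived for a in agents)) if agents else set()

pilot = Agent("pilot", "human", IndividualSA(perceived={"threat_east", "fuel_low"}))
copilot_ai = Agent("copilot", "AI", IndividualSA(perceived={"threat_east"}))
transactive_exchange(pilot, copilot_ai, {"fuel_low"})
print(team_shared_sa([pilot, copilot_ai]))  # {'threat_east', 'fuel_low'}
```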

    Structuring AI Teammate Communication: An Exploration of AI's Communication Strategies in Human-AI Teams

    In the past decades, artificial intelligence (AI) has been implemented in various domains to facilitate humans in their work, such as healthcare and the automotive industry. Such applications of AI have led to increasing attention on human-AI teaming, where AI closely collaborates with humans as a teammate. AI as a teammate is expected to have the ability to coordinate with humans by sharing task-related information, predicting other teammates' behaviors, and progressing team tasks accordingly. To complete these team activities effectively, AI teammates must communicate with humans, for example by sharing updates and checking team progress. Even though communication is a core element of teamwork that helps to achieve effective coordination, how to design and structure human-AI communication in teaming environments remains unclear. Given the context-dependent characteristics of communication, research on human-AI team communication needs to narrow down and focus on specific communication elements/components, such as the proactivity of communication and communication content. In doing so, this dissertation explores how AI teammates' communication should be structured by modifying communication components through three studies, each of which details a critical component of effective AI communication: (1) communication proactivity, (2) communication content (explanation), and (3) communication approach (verbal vs. non-verbal). These studies provide insights into how AI teammates' communication can be integrated into teamwork and how to design AI teammate communication in human-AI teaming.

    Study 1 explores an important communication element, communication proactivity, and its impact on team processes and team performance. Specifically, communication proactivity in this dissertation refers to whether an AI teammate proactively communicates with human teammates, i.e., proactively pushes information to them. Experimental analysis shows that AI teammates' proactive communication plays a crucial role in shaping human perceptions, such as perceived teammate performance and satisfaction with the teammate. Importantly, teams with a non-proactively communicating AI teammate improve team performance more than teams with a proactively communicating AI as the human and the AI collaborate more. This study identifies the positive impact of AI being proactive in communication at the initial stage of task coordination, as well as the potential need for flexibility in AI's communication proactivity (i.e., once the human and AI teammates' coordination pattern forms, the AI can be non-proactive in communication).

    Study 2 examines communication content by focusing on AI's explanations and their impact on human perceptions in teaming environments. Results indicate that AI's explanation, as part of communication content, does not always positively impact human trust in human-AI teaming. Instead, the impact of AI's explanations on human perceptions depends on the specific collaboration scenario. Specifically, AI's explanations facilitate trust in the AI teammate when explaining why the AI disobeyed humans' orders, but hinder trust when explaining why the AI lied to humans. In addition, an AI giving an explanation of why it ignored the human teammate's injury was perceived to be more effective than an AI not providing such an explanation. The findings emphasize the context-dependent character of AI's communication content, with a focus on AI's explanations of its actions.

    Study 3 investigates AI's communication approach, manipulated as verbal vs. non-verbal communication. Results indicate that AI teammates' verbal/non-verbal communication does not impact human trust in the AI teammate, but it facilitates the maintenance of humans' situation awareness in task coordination. In addition, AI with non-verbal communication is perceived as having lower communication quality and lower performance. Importantly, AI with non-verbal communication yields better team performance in human-human-AI teams than in human-AI-AI teams, whereas AI with verbal communication yields better team performance in human-AI-AI teams than in human-human-AI teams.

    Together, these three studies address multiple research gaps in human-AI team communication and provide a holistic view of the design and structure of AI's communication by examining three specific aspects of communication in human-AI teaming. In addition, each study proposes practical design implications for AI's communication in human-AI teams, which will assist AI designers and developers in creating better AI teammates that facilitate humans in teaming environments.
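    Study 1's finding that proactive pushes help most before a human-AI coordination pattern has formed suggests a simple adaptive policy. The sketch below is a hedged illustration of that idea only: the stability heuristic, thresholds, and class names are assumptions, not the dissertation's implementation.

```python
# Hedged sketch (not the dissertation's implementation): a toy proactivity
# policy that pushes information while the human-AI coordination pattern is
# still forming, then falls back to on-request communication once it is not.
from collections import deque

class ProactivityPolicy:
    def __init__(self, window: int = 10, stability_threshold: float = 0.8):
        self.recent_requests = deque(maxlen=window)  # 1 = human asked before the AI pushed
        self.stability_threshold = stability_threshold

    def record_interaction(self, human_asked_first: bool) -> None:
        self.recent_requests.append(1 if human_asked_first else 0)

    def coordination_stabilized(self) -> bool:
        """Treat the pattern as stable once the human reliably pulls the
        information on their own (a crude stand-in for 'pattern formed')."""
        if len(self.recent_requests) < self.recent_requests.maxlen:
            return False
        return sum(self.recent_requests) / len(self.recent_requests) >= self.stability_threshold

    def should_push(self, info_is_task_critical: bool) -> bool:
        # Always push critical updates; otherwise push only while the
        # coordination pattern is still forming.
        return info_is_task_critical or not self.coordination_stabilized()

policy = ProactivityPolicy()
for _ in range(10):
    policy.record_interaction(human_asked_first=True)
print(policy.should_push(info_is_task_critical=False))  # False: switch to pull mode
```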

    A Technical Roadmap for Autonomy for Marine Future Vertical Lift (FVL)

    NPS NRP Executive Summary: The Marines desire to leverage automation in their next Future Vertical Lift (FVL) platform, meaning they must define the human-FVL teaming interactions. The FVL will operate across a wide spectrum of flight regimes, from remotely piloted, to fully manned, to mostly automatic, and in combinations of the above. This broadened operating approach requires that the interdependent human-machine teaming interactions across this diverse operating spectrum be completely delineated. NPS is well positioned to assist. Three approaches are considered: use Co-active Design, a rigorous engineering process that captures these interactions and interdependencies, develops workflows, and identifies resilient paths for human-machine teaming using interdependence analysis (IA); define an FVL 'Living Lab' (LL) that the FVL program management office (PMO) could use to explore technical and concept tradeoffs; establish the cost/benefit relationships of these approaches; and design approaches to developing trust within this operating framework. The topic sponsor desires these techniques so as to create a PMO that reduces the time needed to identify and make technical tradeoffs. Topic sponsors: HQMC Aviation (AVN); Chief of Naval Operations (CNO). This research is supported by funding from the Naval Postgraduate School, Naval Research Program (PE 0605853N/2098). https://nps.edu/nrp. Approved for public release. Distribution is unlimited.