19 research outputs found

    Critical Review in Computer Science: Identification of the Relationship of the Social and Human Factors Related to Teamwork with Software Development Team’s Productivity

    This research aims to explore crucial aspects of software development team collaboration that affect productivity. The collaboration components discussed in this research include attitude competencies, skill competencies, and knowledge competencies. The results of this study indicate that these components, their relationships, and human factors influence the productivity of the software development team.

    Understanding the Role of Trust in Human-Autonomy Teaming

    This study aims to better understand trust in human-autonomy teams, finding that trust is related to team performance. A Wizard of Oz methodology was used in an experiment to simulate an autonomous agent as a team member in a remotely piloted aircraft system environment. Specific focuses of the study were team performance and team social behaviors (specifically trust) of human-autonomy teams. Results indicate 1) that there are lower levels of trust in the autonomous agent in low performing teams than in both medium and high performing teams, 2) that there is a loss of trust in the autonomous agent across low, medium, and high performing teams over time, and 3) that in addition to the human team members indicating low levels of trust in the autonomous agent, both low and medium performing teams also indicated lower levels of trust in their human team members.

    From Tools to Teammates: Conceptualizing Humans’ Perception of Machines as Teammates with a Systematic Literature Review

    The accelerating capabilities of systems brought about by advances in Artificial Intelligence challenge the traditional notion of systems as tools. Systems' increasingly agentic and collaborative character offers the potential for a new user-system interaction paradigm: teaming replaces unidirectional system use. Yet, extant literature addresses the prerequisites for this new interaction paradigm inconsistently, often without even considering the foundations established in the human teaming literature. To address this, this study uses a systematic literature review to conceptualize the drivers of the perception of systems as teammates instead of tools. In doing so, it integrates insights from the dispersed and interdisciplinary field of human-machine teaming with established human teaming principles. The creation of a team setting and a social entity, as well as specific configurations of the machine teammate's collaborative behaviors, are identified as the main drivers of the formation of impactful human-machine teams.

    Trust is Not Enough: Examining the Role of Distrust in Human-Autonomy Teams

    As automation solutions in manufacturing grow more accessible, there are consistent calls to augment the capabilities of humans through the use of autonomous agents, leading to human-autonomy teams (HATs). Many constructs from the human-human teaming literature are being studied in the context of HATs, such as affective emergent states. Among these, trust has been demonstrated to play a critical role in both human teams and HATs, particularly when considering the reliability of agent performance. However, the HAT literature fails to account for the distinction between trust and distrust. Consequently, this study investigates the effects of both trust and distrust in HATs in order to broaden the current understanding of trust dynamics in HATs and improve team functioning. Findings were inconclusive, but a path forward is discussed regarding self-report and unobtrusive measures of trust and distrust in HATs.

    The Impact of Coordination Quality on Coordination Dynamics and Team Performance: When Humans Team with Autonomy

    The increasing role of highly automated and intelligent systems as team members has started a paradigm shift from human-human teaming to Human-Autonomy Teaming (HAT). However, moving from human-human teaming to HAT is challenging. Teamwork requires skills that are often missing in robots and synthetic agents. It is possible that adding a synthetic agent as a team member may lead teams to demonstrate different coordination patterns, resulting in differences in team cognition and ultimately team effectiveness. The theory of Interactive Team Cognition (ITC) emphasizes the importance of team interaction behaviors over the collection of individual knowledge. In this dissertation, Nonlinear Dynamical Methods (NDMs) were applied to capture characteristics of overall team coordination and communication behaviors. The findings supported the hypothesis that coordination stability is related to team performance in a nonlinear manner, with optimal performance associated with moderate stability coupled with flexibility. Thus, we need to build mechanisms into HATs that demonstrate moderately stable and flexible coordination behavior to achieve team-level goals under routine and novel task conditions.

    When AI joins the Team: A Literature Review on Intragroup Processes and their Effect on Team Performance in Team-AI Collaboration

    Although systems based on artificial intelligence (AI) can collaborate with humans on various complex tasks, little is known about how AI systems can successfully collaborate with human teams (team-AI collaboration). Team performance research states that team composition and intragroup processes are important predictors of team performance. However, it is not clear how intragroup processes in team-AI collaboration differ from those in human teams, and whether this is reflected in differences in team performance. To answer these questions, we synthesize evidence from 18 empirical articles. Results indicate that intragroup processes like communication and coordination are less effective in team-AI collaboration. Moreover, whether team cognition and trust are higher in team-AI collaboration than in human teams is not clear, since studies report conflicting results. Likewise, the results on team performance differences between team-AI collaboration and human teams are inconsistent. With this article we offer a foundation for future research on team-AI collaboration.

    Design for Acceptance and Intuitive Interaction: Teaming Autonomous Aerial Systems with Non-experts

    In recent years, rapid developments in artificial intelligence (AI) and robotics have enabled transportation systems such as delivery drones to strive for ever-higher levels of autonomy and to improve infrastructure in many industries. Consequently, the significance of interaction between autonomous systems and humans with little or no experience is steadily rising. While acceptance of delivery drones remains low among the general public, a solution for intuitive interaction with autonomous drones to retrieve packages is urgently needed so that non-experts can also benefit from the technology. We apply a design science research approach and develop a mobile application as a solution instantiation for both challenges. We conduct one expert and one non-expert design cycle to integrate the necessary domain knowledge and ensure acceptance of the artifact by potential non-expert users. The results show that teaming non-experts with complex autonomous systems requires rethinking common design requirements, such as ensuring transparency of AI-based decisions.

    Agent Teaming Situation Awareness (ATSA): A Situation Awareness Framework for Human-AI Teaming

    The rapid advancements in artificial intelligence (AI) have led to a growing trend of human-AI teaming (HAT) in various fields. As machines continue to evolve from mere automation to a state of autonomy, they are increasingly exhibiting unexpected behaviors and human-like cognitive/intelligent capabilities, including situation awareness (SA). This shift has the potential to enhance the performance of mixed human-AI teams over all-human teams, underscoring the need for a better understanding of the dynamic SA interactions between humans and machines. To this end, we provide a review of leading SA theoretical models and a new framework for SA in the HAT context based on the key features and processes of HAT. The Agent Teaming Situation Awareness (ATSA) framework unifies human and AI behavior and involves bidirectional, dynamic interaction. The framework builds on individual and team SA models and elaborates on the cognitive mechanisms for modeling HAT. Similar perceptual cycles are adopted for the individual (both human and AI) and the whole team, tailored to the unique requirements of the HAT context. ATSA emphasizes cohesive and effective HAT through structures and components, including teaming understanding, teaming control, and the world, as well as an adhesive transactive part. We further propose several future research directions to expand on the distinctive contributions of ATSA and address the specific and pressing next steps.