5,136 research outputs found

    Teaming Models with Intelligent Systems at the Workplace

    Information systems research (Wirtschaftsinformatik, WI) has secured a firm place at German-speaking universities. With the growing shortage of skilled professionals, its contributions to training specialists for automated systems, and to the related research, are likely to become even more important. In the public eye, and partly also in the trade media, its level of recognition does not match its societal significance. For example, relative laypeople, e.g. in politics, present traditional subjects of the discipline as new developments. We argue that WI should not only understand itself as an interdisciplinary field between business administration and computer science, but should increasingly also work in neighboring areas such as public administration, politics, and law. Paths toward this include transferring IT solutions from the private sector to public administration, warning against exaggerations and fads, and carefully identifying the advantages and disadvantages of new methods compared with established ones. In the academic environment, it should be asked whether the current incentive systems for researchers are conducive to these goals.

    Application of Human-Autonomy Teaming (HAT) Patterns to Reduce Crew Operations (RCO)

    Unmanned aerial systems, advanced cockpits, and air traffic management are all seeing dramatic increases in automation. However, while automation may take on some tasks previously performed by humans, humans will still be required to remain in the system for the foreseeable future. The collaboration between humans and these increasingly autonomous systems will begin to resemble cooperation between teammates, rather than simple task allocation. It is critical to understand this human-autonomy teaming (HAT) to optimize these systems in the future. One methodology to understand HAT is by identifying recurring patterns of HAT that have similar characteristics and solutions. This paper applies a methodology for identifying HAT patterns to an advanced cockpit project

    Agent Teaming Situation Awareness (ATSA): A Situation Awareness Framework for Human-AI Teaming

    The rapid advancements in artificial intelligence (AI) have led to a growing trend of human-AI teaming (HAT) in various fields. As machines continue to evolve from mere automation to a state of autonomy, they increasingly exhibit unexpected behaviors and human-like cognitive/intelligent capabilities, including situation awareness (SA). This shift has the potential to make mixed human-AI teams outperform all-human teams, underscoring the need for a better understanding of the dynamic SA interactions between humans and machines. To this end, we provide a review of leading SA theoretical models and a new framework for SA in the HAT context based on the key features and processes of HAT. The Agent Teaming Situation Awareness (ATSA) framework unifies human and AI behavior and involves bidirectional, dynamic interaction. The framework builds on individual and team SA models and elaborates on the cognitive mechanisms for modeling HAT. Similar perceptual cycles are adopted for the individual (both human and AI) and the whole team, tailored to the unique requirements of the HAT context. ATSA emphasizes cohesive and effective HAT through structures and components including teaming understanding, teaming control, and the world, as well as an adhesive transactive part. We further propose several future research directions to expand on the distinctive contributions of ATSA and address specific and pressing next steps.

    From Tools to Teammates: Conceptualizing Humans’ Perception of Machines as Teammates with a Systematic Literature Review

    The accelerating capabilities of systems brought about by advances in Artificial Intelligence challenge the traditional notion of systems as tools. Systems' increasingly agentic and collaborative character offers the potential for a new user-system interaction paradigm: teaming replaces unidirectional system use. Yet, extant literature addresses the prerequisites for this new interaction paradigm inconsistently, often without considering the foundations established in the human teaming literature. To address this, this study uses a systematic literature review to conceptualize the drivers of the perception of systems as teammates instead of tools. In doing so, it integrates insights from the dispersed and interdisciplinary field of human-machine teaming with established human teaming principles. The creation of a team setting and a social entity, as well as specific configurations of the machine teammate's collaborative behaviors, are identified as the main drivers of the formation of impactful human-machine teams.

    The Impact of Coordination Quality on Coordination Dynamics and Team Performance: When Humans Team with Autonomy

    The increasing role of highly automated and intelligent systems as team members has started a paradigm shift from human-human teaming to Human-Autonomy Teaming (HAT). However, moving from human-human teaming to HAT is challenging: teamwork requires skills that are often missing in robots and synthetic agents. Adding a synthetic agent as a team member may lead teams to demonstrate different coordination patterns, resulting in differences in team cognition and ultimately team effectiveness. The theory of Interactive Team Cognition (ITC) emphasizes the importance of team interaction behaviors over the collection of individual knowledge. In this dissertation, Nonlinear Dynamical Methods (NDMs) were applied to capture characteristics of overall team coordination and communication behaviors. The findings supported the hypothesis that coordination stability is related to team performance in a nonlinear manner, with optimal performance associated with moderate stability coupled with flexibility. Thus, HATs need mechanisms that produce moderately stable yet flexible coordination behavior to achieve team-level goals under both routine and novel task conditions.

    Analysis of Human and Agent Characteristics on Human-Agent Team Performance and Trust

    The human-agent team represents a new construct in how the United States Department of Defense approaches mission planning and mission accomplishment. For mission planning and accomplishment to be successful, several requirements must be met: a firm understanding of human trust in automated agents, of how human and automated agent characteristics influence human-agent team performance, and of how humans behave. This thesis applies a combination of modeling techniques and human experimentation to investigate these concepts. The modeling techniques include static modeling in SysML activity diagrams and dynamic modeling of both human and agent behavior in IMPRINT. Additionally, this research included human experimentation in a dynamic, event-driven teaming environment known as Space Navigator. Both the modeling and the experimentation show that the agent's reliability has a significant effect on human-agent team performance. This research also found that the age, gender, and education level of the human user are related to the user's perceived trust in the agent. Finally, it was found that archetypes, i.e., patterns of compliant human behavior, can be created to classify human users.

    When AI joins the Team: A Literature Review on Intragroup Processes and their Effect on Team Performance in Team-AI Collaboration

    Although systems based on artificial intelligence (AI) can collaborate with humans on various complex tasks, little is known about how AI systems can successfully collaborate with human teams (team-AI collaboration). Team performance research states that team composition and intragroup processes are important predictors of team performance. However, it is not clear how intragroup processes differ in team-AI collaboration from human teams and if this is reflected in differences in team performance. To answer these questions, we synthesize evidence from 18 empirical articles. Results indicate that intragroup processes like communication and coordination are less effective in team-AI collaboration. Moreover, whether team cognition and trust are higher in team-AI collaboration compared to human teams is not clear, since studies find conflicting results. Likewise, the results on team performance differences between team-AI collaboration and human teams are inconsistent. With this article we offer a foundation for future research on team-AI collaboration