
    Agent Transparency for Intelligent Target Identification in the Maritime Domain, and its impact on Operator Performance, Workload and Trust

    Objective: To examine how increasing the transparency of an intelligent maritime target identification system affects operator performance, workload, and trust in the intelligent agent.

    Background: Previous research has shown that operator accuracy improves with increased transparency of an intelligent agent’s decisions and recommendations. This can come at the cost of increased workload and response time, although not all studies have found this cost. Prior studies have predominantly focused on route planning and navigation, and it is unclear whether the benefits of agent transparency would apply to other tasks such as target identification.

    Method: Twenty-seven participants were required to identify a number of tracks based on a set of identification criteria and the recommendation of an intelligent agent at three transparency levels in a repeated-measures design. The intelligent agent generated an identification recommendation for each track with different levels of transparency information displayed, and participants were required to determine the identity of the track. For each transparency level, 70% of the recommendations made by the intelligent agent were correct, with incorrect recommendations due to additional information that the agent was not aware of, such as information from the ship’s radar. Participants’ identification accuracy and identification time were measured, and surveys on subjective workload and subjective trust in the intelligent agent were collected for each transparency level.

    Results: Increased transparency information improved the operators’ sensitivity to the accuracy of the agent’s decisions and produced a greater tendency to accept the agent’s decisions. Increased agent transparency facilitated human-agent teaming without increasing workload or response time when participants correctly accepted the intelligent agent’s decisions, but it increased response time when they rejected the agent’s incorrect decisions. Participants also reported a higher level of trust when the intelligent agent was more transparent.

    Conclusion: This study shows the ability of agent transparency to improve performance without increasing workload. Greater agent transparency is also beneficial in building operator trust in the agent.

    Application: The current study can inform the design and use of uninhabited vehicles and intelligent agents in the maritime context for target identification. It also demonstrates that providing greater transparency of intelligent agents can improve human-agent teaming performance for a previously unstudied task and domain, and hence suggests broader applicability for the design of intelligent agents.

    Thesis (M.Psych(Organisational & Human Factors)) -- University of Adelaide, School of Psychology, 201
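    The abstract does not specify how operator sensitivity was computed. A common way to quantify sensitivity to an agent’s decision accuracy in a design like this (70% correct recommendations, accept/reject responses) is signal detection analysis; the sketch below is a hypothetical Python illustration, not the thesis’s actual analysis, assuming a “hit” means accepting a correct recommendation and a “false alarm” means accepting an incorrect one.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') from accept/reject counts.

    A log-linear correction (+0.5 / +1) keeps hit or false-alarm rates
    of exactly 0 or 1 from producing infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative counts for one participant at one transparency level,
# mirroring the study's 70%-correct recommendation rate (21 of 30 tracks correct).
print(round(d_prime(hits=18, misses=3, false_alarms=2, correct_rejections=7), 2))
```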

    Crew Resource Management for Automated Teammates (CRM-A)

    Crew Resource Management (CRM) is the application of human factors knowledge and skills to ensure that teams make effective use of all resources. This includes ensuring that pilots bring in the opinions of other teammates and utilize their unique capabilities. CRM was originally developed 40 years ago, in response to a number of airline accidents in which the crew was found to be at fault, with the goal of improving teamwork among airline cockpit crews. The notion of "team" was later expanded to include cabin crew and ground resources, and CRM has also been adopted by other industries, most notably medicine. Automation research now faces issues similar to those aviation faced 40 years ago: how to create a more robust system by making full use of both the automation and its human operators. With advances in machine intelligence, processing speed, and cheap, plentiful memory, automation has advanced to the point that it can and should be treated as a teammate in order to take full advantage of its capabilities and contributions to the system. This area of research is known as Human-Autonomy Teaming (HAT). Research on HAT has identified reusable patterns that can be applied in a wide range of applications, including features such as bi-directional communication and working agreements. This paper explores the synergies between CRM and HAT. We believe that HAT research has much to learn from CRM and that there are benefits to expanding CRM to cover automation.

    Structuring AI Teammate Communication: An Exploration of AI's Communication Strategies in Human-AI Teams

    Over the past decades, artificial intelligence (AI) has been implemented in various domains, such as healthcare and the automotive industry, to support humans in their work. Such application of AI has led to increasing attention on human-AI teaming, where AI closely collaborates with humans as a teammate. AI as a teammate is expected to have the ability to coordinate with humans by sharing task-related information, predicting other teammates’ behaviors, and progressing team tasks accordingly. To complete these team activities effectively, AI teammates must communicate with humans, for example by sharing updates and checking team progress. Even though communication is a core element of teamwork that helps to achieve effective coordination, how to design and structure human-AI communication in teaming environments remains unclear. Given the context-dependent characteristics of communication, research on human-AI teaming communication needs to narrow down and focus on specific communication elements/components, such as the proactivity of communication and communication content. In doing so, this dissertation explores how AI teammates’ communication should be structured by modifying communication components through three studies, each of which details a critical component of effective AI communication: (1) communication proactivity, (2) communication content (explanation), and (3) communication approach (verbal vs. non-verbal). These studies provide insights into how AI teammates’ communication can be integrated into teamwork and how to design AI teammate communication in human-AI teaming.

    Study 1 explores an important communication element, communication proactivity, and its impact on team processes and team performance. Specifically, communication proactivity in this dissertation refers to whether an AI teammate proactively communicates with human teammates, i.e., proactively pushes information to human teammates. Experimental analysis shows that AI teammates’ proactive communication plays a crucial role in shaping human perceptions, such as perceived teammate performance and satisfaction with the teammate. Importantly, teams with a non-proactively communicating AI teammate increase team performance more than teams with a proactively communicating AI as the human and the AI collaborate more. This study identifies the positive impact of AI being proactive in communication at the initial stage of task coordination, as well as the potential need for flexibility in AI’s communication proactivity (i.e., once a coordination pattern between human and AI teammates forms, the AI can be non-proactive in communication).

    Study 2 examines communication content by focusing on AI’s explanations and their impact on human perceptions in teaming environments. Results indicate that AI’s explanation, as part of communication content, does not always positively impact human trust in human-AI teaming. Instead, the impact of AI’s explanations on human perceptions depends on specific collaboration scenarios. Specifically, AI’s explanations facilitate trust in the AI teammate when explaining why the AI disobeyed humans’ orders, but hinder trust when explaining why the AI lied to humans. In addition, an AI that explained why it ignored the human teammate’s injury was perceived as more effective than an AI that did not provide such an explanation. These findings emphasize the context-dependent characteristic of AI’s communication content, with a focus on AI’s explanations of its actions.

    Study 3 investigates AI’s communication approach, which was manipulated as verbal vs. non-verbal communication. Results indicate that AI teammates’ verbal/non-verbal communication does not impact human trust in the AI teammate, but facilitates the maintenance of humans’ situation awareness in task coordination. In addition, AI with non-verbal communication is perceived as having lower communication quality and lower performance. Importantly, AI with non-verbal communication has better team performance in human-human-AI teams than in human-AI-AI teams, whereas AI with verbal communication has better team performance in human-AI-AI teams than in human-human-AI teams.

    Together, these three studies address multiple research gaps in human-AI team communication and provide a holistic view of the design and structure of AI’s communication by examining three specific aspects of communication in human-AI teaming. In addition, each study in this dissertation proposes practical design implications for AI’s communication in human-AI teams, which will assist AI designers and developers in creating better AI teammates that support humans in teaming environments.

    The New Dream Team? A Review of Human-AI Collaboration Research From a Human Teamwork Perspective

    A new generation of information systems based on artificial intelligence (AI) is transforming the way we work. However, existing research on human-AI collaboration is scattered across disciplines, highlighting the need for more transparency about the design of human-AI collaboration in organizational contexts. This paper addresses this gap by reviewing the literature on human-AI collaboration through the lens of human teamwork. Our results provide insights into how emerging topics of human-AI collaboration are connected and influence each other. In particular, the review indicates that, with the increasing complexity of organizational settings, human-AI collaboration needs to be designed differently, and team maintenance activities become more important due to humans’ increased communication requirements. Our main contribution is a novel framework of temporal phases in human-AI collaboration, identifying the mechanisms that need to be considered when designing human-AI collaboration for organizational contexts. Additionally, we use our framework to derive a future research agenda.