
    Understanding the Role of Trust in Human-Autonomy Teaming

    This study aims to better understand trust in human-autonomy teams, finding that trust is related to team performance. A Wizard of Oz methodology was used in an experiment to simulate an autonomous agent as a team member in a remotely piloted aircraft system environment. The study focused on the team performance and team social behaviors (specifically trust) of human-autonomy teams. Results indicate that 1) low performing teams showed lower levels of trust in the autonomous agent than both medium and high performing teams, 2) trust in the autonomous agent declined over time across low, medium, and high performing teams, and 3) in addition to indicating low levels of trust in the autonomous agent, members of both low and medium performing teams also indicated lower levels of trust in their human team members.

    Exploring and Supporting Expert and Novice Reasoning in a Complex and Uncertain Domain: Resolving Labour Disputes.

    This research aimed to explore and support the reason-based decision making processes of experts and novices in a complex and uncertain domain: resolving labour disputes. Naturalistic Decision Making (NDM) has investigated the role of expertise in complex and uncertain domains that are often time pressured. NDM models typically focus on fast decisions and explain the reasoning processes behind slower decisions less well. Although there is much research on expertise, experts' reasoning on complex problems is less well understood. Therefore, this research examined experts' reasoning in slower, reason-based decisions. The first empirical chapter examined how complex labour judgements were made by testing a Mental Model Theory (MMT) of probabilistic reasoning. This was followed by a second empirical chapter, in which participants' (labour officers') thought processes were elicited using a think-aloud protocol. Based on these findings, the thesis then developed a reasoning aid to support reasoning, followed by an evaluation of any changes in reasoning processes and outcomes in the third empirical chapter. The final empirical chapter validated the efficiency of the reasoning aid. Six scenarios were developed to replicate typical labour cases and were used to assess reasoning processes on a realistic task. Participants for each study numbered 42, 22, 28 and 82 respectively. The data for Studies 1 and 4 were analysed quantitatively, and the verbal protocols for Studies 2 and 3 were analysed qualitatively. Verbal protocols were recorded and transcribed, then transcripts were coded based on participants' reasoning processes. Differences between experienced and less-experienced officers were also tested. Study 1 provided mixed evidence of reasoning according to MMT, finding that experienced and less-experienced officers were not significantly different. In Study 2 the data were analysed using six higher-order codes proposed by Toulmin et al. (1979), and each protocol was drawn into an argument map. This showed that experienced officers drew more accurate conclusions, omitted less evidence and offered more justifications than less-experienced officers. The reasoning aid used in Study 3 improved less-experienced officers' reasoning such that their conclusion accuracy matched that of experienced officers. However, Study 4 revealed that, while the reasoning aid had no impact on the reasoning processes, the level of experience had a significant effect. This research provides a good description of participants' reason-based decision making. Toulmin's argument analysis approach provides a unique contribution to understanding reasoning in this realistic and complex task. Although the reasoning aid reduces the differences between experienced and less-experienced officers, experience still plays a crucial role in ensuring correct outcomes.

    Human Autonomy Teaming - The Teamwork of the Future

    This is an edited volume. As a result of technological advances, collaboration between humans and technology is becoming increasingly important. In this context, Human Autonomy Teaming (HAT), a new form of teamwork between human team members and technical units, so-called autonomous agents, holds great potential and offers many possibilities in research and application. The human cooperates with the technical team member and is supported by it in joint tasks; both actors complement each other in the team with their individual strengths, striving to achieve a common goal. This book presents current topics within the framework of HAT in an accessible form for researchers and practitioners, so that they can jointly contribute to the successful implementation of autonomous agents as human team members in the sense of HAT. Chapter 1 introduces the topic, presents basic definitions and models for the entire work, and shows the potential of HAT. Chapter 2 deals with human and technological requirements for successful HAT, before Chapter 3 goes into more detail on the cooperation between humans and technology and the associated strengths and weaknesses. Chapter 4 provides insights into current fields of application of HAT. Finally, Chapter 5 discusses future developments of HAT.

    The facilitation of trust in automation: a qualitative study of behaviour and attitudes towards emerging technology in military culture

    The most researched areas in the field of Trust in Automation are domains of high speciality and criticality. Few studies have explored the nuances of the psycho-social environment and organisational culture in the development of appropriate mental models of dispositional trust. To aid the integration of human operators with emergent specialised systems, there is ambition to introduce Human-Human/Human-System analogies with AI Avatars and 3D representations of environments (Ministry of Defence, 2018). Given the criticisms in the literature of Human-Human and Human-System teaming analogues, this research explored the personal narratives of civilians and military personnel about technology, adaptability, and how to facilitate beneficial attitudes and behaviours regarding appropriate trust, reliance and misuse. A subdivision of the research explores the socio-cultural idiosyncrasies within the different echelons of the military, as variances in authority and kinship provide insight for informing training targeted to unique domains. The thesis proposes that there are core hindrances to tacit trust facilitation with automation, as cognitive rigidity towards individual and group identities impacts socially constructed responses and internal mental models. Furthermore, as automation broaches category boundaries, there may be resistance and discomfort resulting from unpredictable social contracts whereby transactional and relational trust-related power dynamics are unknown or unpredictable.