2 research outputs found

    Imitating Human Responses via a Dual-Process Model Approach

    Human-autonomous system teaming is becoming more prevalent in the Air Force and in society. The concept of a shared mental model is often discussed as a means to enhance collaborative work arrangements between a human and an autonomous system: the idea is that when the models are aligned, the team is more productive due to an increase in trust, predictability, and apparent understanding. This research presents the Dual-Process Model using multivariate normal probability density functions (DPM-MN), a cognitive architecture algorithm based on the psychological dual-process theory. Dual-process theory proposes a bipartite decision-making process in people, labeling the intuitive mode "System 1" and the reflective mode "System 2". The current research suggests that an agent which forms decisions based on a dual-process model can maintain a better shared mental model with its human teammate. Evaluation of DPM-MN in a game called Space Navigator shows that DPM-MN is a successful model motivated by dual-process theory.
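    As a rough illustration only (not the paper's actual algorithm or parameters), the following Python sketch shows one way a dual-process agent might route decisions through multivariate normal probability density functions: a fast "System 1" pattern match over learned response prototypes, deferring to a slower "System 2" process when no prototype fits confidently. The prototypes, covariances, threshold, and System 2 stub are invented placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical "System 1" prototypes: each candidate response is modeled
# as a multivariate normal over observed state features. The means and
# covariances here are placeholders, not values from the paper.
PROTOTYPES = {
    "evade":     (np.array([0.2, 0.8]), np.eye(2) * 0.05),
    "intercept": (np.array([0.7, 0.3]), np.eye(2) * 0.05),
}

CONFIDENCE_THRESHOLD = 1.0  # illustrative; would be tuned per task


def system1(state):
    """Fast, intuitive match: score each prototype by its density at the state."""
    scores = {
        name: multivariate_normal.pdf(state, mean=mu, cov=cov)
        for name, (mu, cov) in PROTOTYPES.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]


def system2(state):
    """Slow, reflective fallback (stub): e.g., deliberate search or planning."""
    return "replan_route"  # placeholder for a costlier reasoning process


def decide(state):
    """Use System 1 when its density is confident enough, else engage System 2."""
    response, confidence = system1(state)
    return response if confidence >= CONFIDENCE_THRESHOLD else system2(state)


print(decide(np.array([0.22, 0.78])))  # near the "evade" prototype -> System 1
print(decide(np.array([0.45, 0.55])))  # ambiguous state -> defers to System 2
```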

    A Design Thinking Framework for Human-Centric Explainable Artificial Intelligence in Time-Critical Systems

    Artificial Intelligence (AI) has seen a surge in popularity as increased computing power has made it more viable and useful. The increasing complexity of AI, however, can lead to difficulty in understanding or interpreting the results of AI procedures, which can then lead to incorrect predictions, classifications, or analyses of outcomes. The result of these problems can be over-reliance on AI, under-reliance on AI, or simply confusion as to what the results mean. Additionally, the complexity of AI models can obscure the algorithmic, data, and design biases to which all models are subject, which may exacerbate negative outcomes, particularly with respect to minority populations. Explainable AI (XAI) aims to mitigate these problems by providing information on the intent, performance, and reasoning process of the AI. Where time or cognitive resources are limited, however, the burden of additional information can negatively impact performance. Ensuring XAI information is intuitive and relevant allows the user to quickly calibrate their trust in the AI, in turn improving trust in suggested task alternatives, reducing workload, and improving task performance. This study details a structured approach to the development of XAI in time-critical systems based on a design thinking framework that preserves the agile, fast-iterative approach characteristic of design thinking and augments it with practical tools and guides. The framework establishes a focus on shared situational perspective and a deep understanding of both users and the AI in the empathy phase, provides a model with seven XAI levels and corresponding solution themes, and defines objective, physiological metrics for concurrent assessment of trust and workload.
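    As a hedged sketch of how such a leveled model might be applied in a time-critical setting (the abstract does not name the seven levels or their solution themes, so everything below is a hypothetical placeholder), an interface could select the deepest explanation the operator's time budget allows:

```python
from dataclasses import dataclass

@dataclass
class XaiLevel:
    level: int                 # 1 (minimal cue) .. 7 (fullest explanation)
    min_time_budget_s: float   # time needed to present and absorb it

# Seven placeholder levels; the framework's actual levels and solution
# themes are defined in the paper, not reproduced here.
LEVELS = [XaiLevel(i, t) for i, t in
          zip(range(1, 8), [0.5, 1.0, 2.0, 4.0, 8.0, 15.0, 30.0])]

def select_level(time_budget_s: float) -> XaiLevel:
    """Pick the deepest explanation the time budget allows, so added
    information never crowds out the time-critical task itself."""
    feasible = [lv for lv in LEVELS if lv.min_time_budget_s <= time_budget_s]
    return feasible[-1] if feasible else LEVELS[0]

print(select_level(5.0).level)  # mid-depth explanation fits the budget
print(select_level(0.2).level)  # degrades gracefully to the minimal level
```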