    Multimodal information presentation for high-load human computer interaction

    This dissertation addresses the question: given an application and an interaction context, how can interfaces present information to users in a way that improves the quality of interaction (e.g. better user performance, lower cognitive demand, and greater user satisfaction)? Information presentation is critical to the quality of interaction because it guides, constrains, and even determines cognitive behavior. A good presentation is particularly desirable in high-load human-computer interactions, such as when users are under time pressure or stress, or are multi-tasking. Under a high mental workload, users may not have the spare cognitive capacity to cope with the unnecessary workload induced by a poor presentation. In this dissertation work, the major presentation factor of interest is modality. We conducted theoretical studies in the cognitive psychology domain to understand the role of presentation modality in different stages of human information processing. Guided by this theory, we conducted a series of user studies investigating the effect of information presentation (modality and other factors) in several high-load task settings. The two task domains are crisis management and driving. Using crisis scenarios, we investigated how to present information to facilitate time-limited visual search and time-limited decision making. In the driving domain, we investigated how to present highly urgent danger warnings and how to present informative cues that help drivers manage their attention across multiple tasks. The outcomes of this dissertation work have useful implications for the design of cognitively compatible user interfaces, and are not limited to high-load applications.
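    As a rough illustration of the kind of modality-selection logic this dissertation studies, the Python sketch below picks a presentation modality from an estimated workload, channel availability, and message urgency. All names, thresholds, and rules here are illustrative assumptions, not the dissertation's actual model.

        # Hypothetical sketch: choosing a presentation modality from workload and
        # task context. Thresholds and modality names are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class InteractionContext:
            workload: float            # estimated mental workload, 0.0 (idle) to 1.0 (saturated)
            visual_channel_busy: bool  # e.g. the user is driving and must watch the road
            urgency: float             # 0.0 (informative cue) to 1.0 (danger warning)

        def choose_modality(ctx: InteractionContext) -> str:
            """Pick a presentation modality that avoids overloading busy channels."""
            if ctx.urgency > 0.8:
                # Highly urgent warnings: redundant multimodal presentation.
                return "audio+visual+tactile"
            if ctx.visual_channel_busy:
                # Offload to a spare sensory channel when vision is occupied.
                return "audio" if ctx.workload < 0.7 else "tactile"
            return "visual"

        print(choose_modality(InteractionContext(workload=0.9, visual_channel_busy=True, urgency=0.3)))
        # -> tactile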

    How Can Physiological Computing Benefit Human-Robot Interaction?

    As systems grow more automated, the human operator is all too often overlooked. Although human-robot interaction (HRI) can be quite demanding in terms of cognitive resources, the mental states (MS) of operators are not yet taken into account by existing systems. Since human operators are not infallible, this omission can lead to hazardous situations. The growing number of neurophysiology and machine learning tools now allows for efficient monitoring of operators' MS. Sending feedback on MS in a closed-loop solution is therefore within reach. Involving a consistent automated planning technique to handle such a process could be a significant asset. This perspective article provides the reader with a synthesis of the relevant literature with a view to implementing systems that adapt to the operator's MS to improve the safety and performance of human-robot operations. First, the need for this approach is detailed with regard to remote operation, an example of HRI. Then, several MS identified as crucial for this type of HRI are defined, along with relevant electrophysiological markers. Particular attention is paid to primary degraded MS linked to time-on-task and task demands, as well as collateral MS linked to system outputs (i.e. feedback and alarms). Lastly, the principle of symbiotic HRI is detailed, and one solution is proposed to incorporate the operator state vector into the system using a mixed-initiative decisional framework to drive the interaction.
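    The closed loop the article describes can be pictured as a minimal pipeline: extract electrophysiological features, classify the operator's MS, and let a mixed-initiative policy adapt the level of automation. The Python sketch below is a hypothetical illustration of that loop; the features, classifier rule, and policy table are placeholders, not the article's proposal.

        # Illustrative closed loop: classify the operator's mental state (MS) from
        # electrophysiological features, then adapt automation accordingly.
        # Every name, feature, and threshold here is an assumption for illustration.
        import random

        def extract_features() -> dict:
            # Stand-in for a real EEG/ECG pipeline; returns fabricated feature values.
            return {"theta_alpha_ratio": random.uniform(0.5, 2.0),
                    "heart_rate_var": random.uniform(20, 80)}

        def classify_mental_state(features: dict) -> str:
            # A trained classifier would go here; this rule is a placeholder.
            if features["theta_alpha_ratio"] > 1.5 and features["heart_rate_var"] < 40:
                return "overloaded"
            if features["theta_alpha_ratio"] < 0.8:
                return "disengaged"  # e.g. mind wandering after long time-on-task
            return "nominal"

        def adapt_automation(state: str) -> str:
            # Mixed-initiative policy: the system takes or yields initiative
            # depending on the operator state vector.
            return {"overloaded": "system takes over low-level control",
                    "disengaged": "system issues re-engagement cue",
                    "nominal": "operator keeps full control"}[state]

        # One iteration of the monitoring loop:
        state = classify_mental_state(extract_features())
        print(state, "->", adapt_automation(state))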

    A Design Thinking Framework for Human-Centric Explainable Artificial Intelligence in Time-Critical Systems

    Artificial Intelligence (AI) has seen a surge in popularity as increased computing power has made it more viable and useful. The increasing complexity of AI, however, can lead to difficulty in understanding or interpreting the results of AI procedures, which can in turn lead to incorrect predictions, classifications, or analyses of outcomes. The result of these problems can be over-reliance on AI, under-reliance on AI, or simply confusion as to what the results mean. Additionally, the complexity of AI models can obscure the algorithmic, data, and design biases to which all models are subject, which may exacerbate negative outcomes, particularly with respect to minority populations. Explainable AI (XAI) aims to mitigate these problems by providing information on the intent, performance, and reasoning process of the AI. Where time or cognitive resources are limited, however, the burden of additional information can negatively impact performance. Ensuring XAI information is intuitive and relevant allows the user to quickly calibrate their trust in the AI, in turn improving trust in suggested task alternatives, reducing workload, and improving task performance. This study details a structured approach to the development of XAI in time-critical systems, based on a design thinking framework that preserves the agile, fast-iterative approach characteristic of design thinking and augments it with practical tools and guides. The framework establishes a focus on shared situational perspective and a deep understanding of both users and the AI in the empathy phase, provides a model with seven XAI levels and corresponding solution themes, and defines objective, physiological metrics for concurrent assessment of trust and workload.
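    One concrete reading of the framework's concurrent trust/workload assessment is a policy that selects how much XAI detail to surface at a given moment. The Python sketch below illustrates that idea; the level names and thresholds are assumptions for illustration only and do not reproduce the paper's actual seven levels or solution themes.

        # Hedged sketch: map concurrent trust and workload estimates to an XAI
        # detail level. Levels and thresholds are placeholders, not the paper's.
        XAI_LEVELS = [
            "no explanation", "confidence score", "top features",
            "counterfactual", "rule trace", "full reasoning chain", "interactive probe",
        ]

        def select_xai_level(trust: float, workload: float) -> str:
            """trust and workload in [0, 1]; illustrative thresholds only."""
            if workload > 0.8:
                # Time-critical and overloaded: keep the explanation minimal.
                return XAI_LEVELS[1]
            if trust < 0.4:
                # Poorly calibrated trust: richer explanation to support calibration.
                return XAI_LEVELS[5]
            return XAI_LEVELS[3]

        print(select_xai_level(trust=0.3, workload=0.5))  # -> full reasoning chain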