
    A psychology literature study on modality related issues for multimodal presentation in crisis management

    The motivation of this psychology literature study is to obtain modality-related guidelines for real-time information presentation in a crisis management environment. The crisis management task is usually accompanied by time urgency, risk, uncertainty, and high information density. Decision makers (crisis managers) may experience cognitive overload and tend to show biases in their performance. The ongoing crisis event therefore needs to be presented in a manner that enhances perception, assists diagnosis, and prevents cognitive overload. To this end, this study examined modality effects on perception, cognitive load, working memory, learning, and attention. Selected topics include working memory, dual-coding theory, cognitive load theory, multimedia learning, and attention. The findings are several modality usage guidelines which may lead to more efficient use of the user's cognitive capacity and enhance information perception.
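    To make the kind of guideline the study derives concrete, here is a minimal sketch of a rule-based modality picker for crisis updates. The thresholds, field names, and rules are illustrative assumptions, grounded loosely in dual-coding and cognitive load theory rather than in the study's actual guidelines.

    # Hypothetical sketch: modality-usage guidelines applied as simple
    # presentation rules. All names and thresholds are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Message:
        urgency: float     # 0.0 (low) .. 1.0 (high)
        complexity: float  # 0.0 (simple) .. 1.0 (dense)

    def choose_modality(msg: Message, visual_channel_load: float) -> str:
        """Route a crisis update to a modality under a cognitive-load heuristic.

        Dual-coding theory suggests distributing information across the
        visual and auditory channels; cognitive load theory suggests
        avoiding an already-saturated channel.
        """
        if msg.urgency > 0.8:
            # Urgent alerts: redundant presentation across both channels.
            return "audio + visual"
        if visual_channel_load > 0.7:
            # Visual channel near overload: off-load to speech.
            return "audio"
        if msg.complexity > 0.6:
            # Dense spatial information is usually better shown than spoken.
            return "visual (map/graphic)"
        return "visual (text)"

    print(choose_modality(Message(urgency=0.9, complexity=0.3), visual_channel_load=0.5))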

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans and based on human models. These interfaces should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.

    Towards a Unified Knowledge-Based Approach to Modality Choice

    This paper advances a unified knowledge-based approach to choosing the most appropriate modality or combination of modalities in multimodal output generation. We propose a Modality Ontology (MO) that models the knowledge needed to support the two most fundamental processes determining modality choice: modality allocation (choosing the modality or set of modalities that can best support a particular type of information) and modality combination (selecting an optimal final combination of modalities). In the proposed ontology we model the main levels that collectively determine the characteristics of each modality, as well as the specific relationships between different modalities that are important for multimodal meaning-making. This ontology aims to support the automatic selection of modalities and combinations of modalities suitable to convey the meaning of the intended message.
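    As a rough illustration of the two processes the paper distinguishes, the sketch below uses plain lookup tables as a toy stand-in for the proposed Modality Ontology. The information types, modality names, and the "combines well" relation are assumptions for illustration, not the paper's actual ontology.

    # Toy "ontology": which information types each modality can support,
    # and which modality pairs the ontology marks as combining well.
    SUPPORTS = {
        "text":    {"instruction", "abstract-relation"},
        "speech":  {"instruction", "alert"},
        "graphic": {"spatial-layout", "abstract-relation"},
        "sound":   {"alert"},
    }
    COMBINES_WELL = {frozenset({"text", "graphic"}), frozenset({"speech", "graphic"})}

    def allocate(info_type: str) -> set[str]:
        """Modality allocation: modalities that can support this information type."""
        return {m for m, types in SUPPORTS.items() if info_type in types}

    def combine(candidates: list[set[str]]) -> set[str]:
        """Modality combination: pick one modality per item, preferring
        choices that form a 'combines well' pair with earlier picks."""
        chosen: set[str] = set()
        for options in candidates:
            pick = next((m for m in options
                         if any(frozenset({m, c}) in COMBINES_WELL for c in chosen)),
                        next(iter(options)))
            chosen.add(pick)
        return chosen

    items = ["spatial-layout", "instruction"]
    print(combine([allocate(t) for t in items]))  # e.g. {'graphic', 'text'}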

    Fourteenth Biennial Status Report: March 2017 - February 2019


    Multimodal imaging of human brain activity: rational, biophysical aspects and modes of integration

    Until relatively recently, the vast majority of imaging and electrophysiological studies of human brain activity relied on single-modality measurements, usually correlated with readily observable or experimentally modified behavioural or brain-state patterns. Multimodal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multimodal imaging and the ways in which it can be accomplished, using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multimodal imaging applications, we also review some of the basic physiology relevant to understanding their relationship.
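    One widely used way to relate the two signal families the review discusses is EEG-informed fMRI analysis: convolve an EEG band-power envelope with a canonical haemodynamic response function (HRF) and correlate the result with a BOLD time series. The sketch below uses synthetic data and assumed double-gamma HRF parameters; it is a minimal illustration, not the review's own method.

    import numpy as np
    from scipy.stats import gamma

    def hrf(t):
        """Canonical double-gamma HRF (SPM-like shape parameters)."""
        return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

    tr = 2.0                            # fMRI repetition time in seconds
    t = np.arange(0, 32, tr)
    rng = np.random.default_rng(0)

    alpha_power = rng.random(200)       # synthetic EEG alpha-band power per TR
    predictor = np.convolve(alpha_power, hrf(t))[:200]

    # Synthetic BOLD: the predictor plus noise, standing in for a real voxel.
    bold = 0.5 * predictor + rng.normal(0, 0.1, 200)

    r = np.corrcoef(predictor, bold)[0, 1]
    print(f"EEG-informed regressor vs. BOLD: r = {r:.2f}")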

    Automatic design of multimodal presentations

    We describe our attempt to integrate multiple AI components such as planning, knowledge representation, natural language generation, and graphics generation into a functioning prototype called WIP that plans and coordinates multimodal presentations in which all material is generated by the system. WIP allows the generation of alternative presentations of the same content, taking into account various contextual factors such as the user's degree of expertise and preferences for a particular output medium or mode. The current prototype of WIP generates multimodal explanations and instructions for assembling, using, maintaining, or repairing physical devices. This paper introduces the task, the functionality, and the architecture of the WIP system. We show that in WIP the design of a multimodal document is viewed as a non-monotonic process that includes various revisions of preliminary results, massive replanning and plan repairs, and many negotiations between design and realization components in order to achieve an optimal division of work between text and graphics. We describe how the plan-based approach to presentation design can be exploited so that graphics generation influences the production of text and vice versa. Finally, we discuss the generation of cross-modal expressions that establish referential relationships between text and graphics elements.
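    The non-monotonic design loop described above can be caricatured in a few lines: a design component proposes a division of work between text and graphics, a realization component may reject an assignment, and the plan is repaired. This is an illustrative sketch, not WIP's actual architecture; all names and the rejection rule are assumptions.

    content = [
        {"id": "overview", "kind": "spatial"},    # e.g. exploded view of a device
        {"id": "step-1",   "kind": "action"},
        {"id": "warning",  "kind": "condition"},
    ]

    def design(items):
        """Initial allocation: spatial content to graphics, the rest to text."""
        return {it["id"]: ("graphics" if it["kind"] == "spatial" else "text")
                for it in items}

    def realize(item_id, medium, graphics_budget):
        """Toy realization component: graphics fail once the budget is exhausted."""
        return not (medium == "graphics" and graphics_budget <= 0)

    plan = design(content)
    graphics_budget = 0   # force a failure to trigger replanning
    for item_id, medium in list(plan.items()):
        if not realize(item_id, medium, graphics_budget):
            # Plan repair: retract the graphics assignment and re-realize as
            # text, mirroring the revision/negotiation loop described above.
            plan[item_id] = "text"
    print(plan)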