
    Reducing Cognitive Load Using Adaptive Uncertainty Visualization

    Uncertainty is inherent in many real-world settings; for example, in a combat situation, darkness may prevent a soldier from classifying approaching troops as friendly or hostile. In an environment plagued with uncertainty, decision-support systems, such as sensor-based networks, may make faulty assumptions about field conditions, especially when information is incomplete or sensor operations are disrupted. Displaying the factors that contribute to uncertainty informs the decision-making process for a human operator, but at the expense of limited cognitive resources, such as attention, memory, and workload. This research applied principles of perceptual cognition to human-computer interface design to introduce uncertainty visualizations in an adaptive approach that improved the operator's decision-making process without unduly burdening the operator's cognitive load. An adaptive approach to uncertainty visualization considers the cognitive burden of all visualizations and reduces the visualizations according to relevancy as the user's cognitive load increases. Experiments were performed with 24 volunteer participants in a simulated environment that featured both intrinsic load and characteristics of uncertainty. The experiments demonstrated that adaptive uncertainty visualization reduced the cognitive burden on the operator's attention, memory, and workload, resulting in increased accuracy rates, faster response times, and a higher degree of user satisfaction. This research adds to the body of knowledge regarding the use of uncertainty visualization in the context of cognitive load. Existing research has not identified techniques to support uncertainty visualization without further burdening cognitive load. This research identified principles, such as goal-oriented visualization and salience, which promote the use of uncertainty visualization for improved decision-making without increasing cognitive load.
This research has extensive significance in fields where both uncertainty and cognitive load factors can reduce the effectiveness of decision-makers, such as sensor-based systems used in the military or in first-responder situations.
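The adaptive approach described above can be illustrated with a minimal sketch: each visualization carries a relevance score and an estimated cognitive cost, and as the operator's load rises, lower-relevance uncertainty visualizations are hidden first. All names and the scoring scheme here are illustrative assumptions, not details from the original work.

```python
from dataclasses import dataclass

@dataclass
class UncertaintyVisualization:
    name: str
    relevance: float       # relevance to the operator's current goal, 0..1
    cognitive_cost: float  # estimated burden on attention/memory, 0..1

def select_visualizations(visualizations, operator_load, load_budget=1.0):
    """Keep the most relevant visualizations whose combined cognitive
    cost fits in the budget left after the operator's current load."""
    remaining = max(load_budget - operator_load, 0.0)
    shown = []
    # Highest-relevance visualizations get first claim on the budget,
    # so less relevant ones drop out as operator_load grows.
    for viz in sorted(visualizations, key=lambda v: v.relevance, reverse=True):
        if viz.cognitive_cost <= remaining:
            shown.append(viz)
            remaining -= viz.cognitive_cost
    return shown
```

Under this sketch, a low-load operator sees every visualization, while a heavily loaded one sees only the most goal-relevant uncertainty cues, which mirrors the relevancy-based reduction the abstract describes.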

    Development of actuated Tangible User Interfaces: new interaction concepts and evaluation methods

    Riedenklau E. Development of actuated Tangible User Interfaces: new interaction concepts and evaluation methods. Bielefeld: Universität Bielefeld; 2016. Making information understandable and literally graspable is the main goal of tangible interaction research. By giving digital data physical representations (Tangible User Interface Objects, or TUIOs), they can be used and manipulated like everyday objects with the users' natural manipulation skills. Such physical interaction is essentially unidirectional, directed from the user to the system, which limits the possible interaction patterns. In other words, the system has no means to actively support the physical interaction. Within the frame of tabletop tangible user interfaces, this problem was addressed by the introduction of actuated TUIOs, which are controllable by the system. Within the frame of this thesis, we present the development of our own actuated TUIOs and address multiple interaction concepts we identified as research gaps in the literature on actuated Tangible User Interfaces (TUIs). Gestural interaction is a natural means for humans to communicate non-verbally using their hands. TUIs should be able to support gestural interaction, since our hands are already heavily involved in the interaction. This has rarely been investigated in the literature. For a tangible social network client application, we investigate two methods for collecting user-defined gestures that our system should be able to interpret for triggering actions. Versatile systems often understand a wide palette of commands. Another approach for triggering actions is the use of menus. We explore the design space of menu metaphors used in TUIs and present our own actuated dial-based approach. Rich interaction modalities may support the understandability of the represented data and make the interaction with them more appealing, but they also place high demands on real-time processing.
We highlight new research directions for integrating feature-rich and multi-modal interaction, such as graphical display, sound output, tactile feedback, our actuated menu, and automatically maintained relations between actuated TUIOs within a remote collaboration application. We also tackle the introduction of further sophisticated measures for the evaluation of TUIs to provide further evidence for the theories on tangible interaction. We tested our enhanced measures within a comparative study. Since one of the key factors in effective manual interaction is speed, we benchmarked the human hand's manipulation speed and compared it with the capabilities of our own implementation of actuated TUIOs and the systems described in the literature. After briefly discussing applications that lie beyond the scope of this thesis, we conclude with a collection of design guidelines gathered in the course of this work and integrate them, together with our findings, into a larger frame.