
    Towards memory supporting personal information management tools

    In this article we discuss re-retrieving personal information objects and relate the task to recovering from lapses in memory. We propose that fundamentally it is lapses in memory that impede users from successfully re-finding the information they need. Our hypothesis is that by learning more about memory lapses in non-computing contexts and how people cope with and recover from these lapses, we can better inform the design of PIM tools and improve the user's ability to re-access and re-use objects. We describe a diary study that investigates the everyday memory problems of 25 people from a wide range of backgrounds. Based on the findings, we present a series of principles that we hypothesize will improve the design of personal information management tools. This hypothesis is validated by an evaluation of a tool for managing personal photographs, which was designed with respect to our findings. The evaluation suggests that users' performance when re-finding objects can be improved by building personal information management tools to support characteristics of human memory.

    Multi-modal information processing for visual workload relief

    The simultaneous performance of two single-dimensional compensatory tracking tasks, one with the left hand and one with the right hand, is discussed. The tracking performed with the left hand was considered the primary task and was performed with a visual display or a quickened kinesthetic-tactual (KT) display. The right-handed tracking was considered the secondary task and was carried out only with a visual display. Although the two primary task displays had afforded equivalent performance in a critical tracking task performed alone, in the dual-task situation the quickened KT primary display resulted in superior secondary visual task performance. Comparisons of various combinations of primary and secondary visual displays in integrated or separated formats indicate that the superiority of the quickened KT display is not simply due to the elimination of visual scanning. Additional testing indicated that quickening per se is also not the immediate cause of the observed KT superiority.

    Spatial audio in small display screen devices

    Our work addresses the problem of (visual) clutter in mobile device interfaces. The solution we propose involves the translation of a technique from the graphical to the audio domain for exploiting space in information representation. This article presents an illustrative example in the form of a spatialised audio progress bar. In usability tests, participants performed background monitoring tasks significantly more accurately using this spatialised audio (as compared with a conventional visual) progress bar. Moreover, their performance in a simultaneously running, visually demanding foreground task was significantly improved in the eyes-free monitoring condition. These results have important implications for the design of multi-tasking interfaces for mobile devices.
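    The abstract does not describe an implementation, but the underlying idea of mapping task progress onto a spatial audio position can be sketched. The snippet below is a minimal, hypothetical illustration (the function name, the left-to-right sweep, and the use of constant-power panning are our assumptions, not details from the article): progress is mapped to stereo channel gains so a background task can be monitored by ear alone.

    # Hypothetical sketch: mapping task progress to a stereo pan position for a
    # spatialised audio progress bar. Names and the panning scheme are
    # illustrative assumptions, not taken from the article.
    import math

    def progress_to_pan(progress: float) -> tuple[float, float]:
        """Map progress in [0, 1] to (left_gain, right_gain).

        The sound source sweeps from hard left (start) to hard right (done),
        using constant-power panning so overall loudness stays roughly even.
        """
        progress = min(max(progress, 0.0), 1.0)
        angle = progress * (math.pi / 2)   # 0 rad = hard left, pi/2 rad = hard right
        return math.cos(angle), math.sin(angle)

    if __name__ == "__main__":
        for p in (0.0, 0.25, 0.5, 0.75, 1.0):
            left, right = progress_to_pan(p)
            print(f"progress={p:.2f}  left={left:.2f}  right={right:.2f}")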

    Reachable but not receptive: enhancing smartphone interruptibility prediction by modelling the extent of user engagement with notifications

    Smartphone notifications frequently interrupt our daily lives, often at inopportune moments. We propose the decision-on-information-gain model, which extends the existing data collection convention to capture a range of interruptibility behaviour implicitly. Through a six-month in-the-wild study of 11,346 notifications, we find that this approach captures up to 125% more interruptibility cases. Secondly, using this approach we find that different behaviours correlate with different contextual features, and that predictive models can be built with >80% precision for most users. However, we note discrepancies in performance across labelling, training, and evaluation methods, creating design considerations for future systems.
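    The abstract does not name the learning method, so the sketch below only illustrates the general recipe it implies: train a per-user classifier on contextual features of past notifications and evaluate it by precision. The synthetic data, the feature set, and the choice of a random-forest model are assumptions made purely for illustration.

    # Hypothetical sketch of a per-user interruptibility classifier; the paper
    # only reports that predictive models reach >80% precision for most users.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import precision_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Stand-in for one user's notification log: contextual features per
    # notification (e.g. hour of day, screen state, motion) and a label for
    # whether the user engaged with the notification (1) or ignored it (0).
    X = rng.random((500, 3))
    y = (X[:, 0] + 0.3 * X[:, 1] > 0.7).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y
    )

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Precision: of the notifications predicted as "user will engage",
    # how many did the user actually engage with?
    print("precision:", precision_score(y_test, model.predict(X_test)))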

    Occlusion-related lateral connections stabilize kinetic depth stimuli through perceptual coupling

    Local sensory information is often ambiguous, forcing the brain to integrate spatiotemporally separated information for stable conscious perception. Lateral connections between clusters of similarly tuned neurons in the visual cortex are a potential neural substrate for the coupling of spatially separated visual information. Ecological optics suggests that perceptual coupling of visual information is particularly beneficial in occlusion situations. Here we present a novel neural network model and a series of human psychophysical experiments that together explain the perceptual coupling of kinetic depth stimuli through activity-driven lateral information sharing in the far depth plane. Our most striking finding is the perceptual coupling of an ambiguous kinetic depth cylinder with a coaxially presented, disparity-defined cylinder backside, while a similar frontside fails to evoke coupling. Altogether, our findings are consistent with the idea that clusters of similarly tuned far-depth neurons share spatially separated motion information in order to resolve local perceptual ambiguities. The classification of far depth in the facilitation mechanism results from a combination of absolute and relative depth, which suggests a functional role of these lateral connections in the perception of partially occluded objects.

    Optimizing The Design Of Multimodal User Interfaces

    Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and task domain perspective is thus critical to the achievement of successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework is based on extending, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. In an effort to validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated in a simulated weapons control system multitasking environment. The results of this study demonstrated significant performance improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing resources across multiple sensory and WM resources. These results provide initial empirical support for validation of the overall AMMO model and a subset of the principle-driven multimodal design guidelines derived from it. The empirically validated multimodal design guidelines may be applicable to a wide range of information-intensive computer-based multitasking environments.