
    Eliciting usage contexts of safety-critical medical devices

    This position paper outlines our approach to improving the choice of suitable devices for different health care environments (contexts). Safety-critical medical devices are presumed to have undergone a thorough (user-centred) design process to optimize the device for the intended purpose, user group and environment. However, in real-life health care scenarios, actual usage may not reflect the original design parameters. We suggest identifying further usage contexts for safety-critical medical devices through ethnographic and other studies, to support better modelling of the challenges of different usage environments. In combination with system and interaction models, these context models can then be used for decision support in choosing medical devices that are suitable for the intended environment.
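The decision-support idea described in the abstract can be illustrated with a minimal sketch. The device attributes, context requirements, and scoring rule below are all hypothetical, not taken from the paper:

```python
def context_match_score(device, context):
    """Fraction of a context's requirements satisfied by a device's
    attributes (an illustrative, deliberately simple scoring rule)."""
    met = sum(1 for requirement in context if requirement in device)
    return met / len(context)

# Hypothetical context model for a noisy intensive-care ward
icu_context = {"audible_alarm_over_60dB", "glove_friendly_controls", "mains_backup"}

# Hypothetical attribute sets for two candidate devices
infusion_pump_a = {"audible_alarm_over_60dB", "glove_friendly_controls"}
infusion_pump_b = {"mains_backup"}

# Rank candidates by how well they match the intended environment
score_a = context_match_score(infusion_pump_a, icu_context)  # 2 of 3 requirements
score_b = context_match_score(infusion_pump_b, icu_context)  # 1 of 3 requirements
```

In a real system the context models would of course be far richer than a set of flags, but the ranking step would have the same shape: score each device against a model of the target environment.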

    Designing multi-sensory displays for abstract data

    The rapid increase in available information has led to many attempts to automatically locate patterns in large, abstract, multi-attributed information spaces. These techniques are often called data mining and have met with varying degrees of success. An alternative approach to automatic pattern detection is to keep the user in the exploration loop by developing displays for perceptual data mining. This approach allows a domain expert to search the data for useful relationships and can be effective when automated rules are hard to define. However, designing models of the abstract data and defining appropriate displays are critical tasks in building a useful system. Designing displays of abstract data is especially difficult when multi-sensory interaction is considered. New technology, such as Virtual Environments, enables such multi-sensory interaction. For example, interfaces can be designed that immerse the user in a 3D space and provide visual, auditory and haptic (tactile) feedback. It has been a goal of Virtual Environments to use multi-sensory interaction in an attempt to increase the human-to-computer bandwidth. This approach may assist the user to understand large information spaces and find patterns in them. However, while the motivation is simple enough, actually designing appropriate mappings between the abstract information and the human sensory channels is quite difficult. Designing intuitive multi-sensory displays of abstract data is complex and needs to carefully consider human perceptual capabilities, yet we interact with the real world every day in a multi-sensory way. Metaphors can describe mappings between the natural world and an abstract information space. This thesis develops a division of the multi-sensory design space called the MS-Taxonomy. The MS-Taxonomy provides a concept map of the design space based on temporal, spatial and direct metaphors. The detailed concepts within the taxonomy allow for discussion of low-level design issues.
Furthermore, the concepts abstract to higher levels, allowing general design issues to be compared and discussed across the different senses. The MS-Taxonomy provides a categorisation of multi-sensory design options. However, designing effective multi-sensory displays requires more than a thorough understanding of design options. It is also useful to have guidelines to follow, and a process to describe the design steps. This thesis uses the structure of the MS-Taxonomy to develop the MS-Guidelines and the MS-Process. The MS-Guidelines capture design recommendations and the problems associated with different design choices. The MS-Process integrates the MS-Guidelines into a methodology for developing and evaluating multi-sensory displays. A detailed case study is used to validate the MS-Taxonomy, the MS-Guidelines and the MS-Process. The case study explores the design of multi-sensory displays within a domain where users wish to explore abstract data for patterns. This area is called Technical Analysis and involves the interpretation of patterns in stock market data. Following the MS-Process and using the MS-Guidelines, some new multi-sensory displays are designed for pattern detection in stock market data. The outcome from the case study includes some novel haptic-visual and auditory-visual designs that are prototyped and evaluated.

    Quantifying Cognitive Efficiency of Display in Human-Machine Systems

    As a side effect of fast-growing information technology, information overload has become prevalent in the operation of many human-machine systems. Overwhelming information can degrade operational performance because it imposes a large mental workload on human operators. One way to address this issue is to improve the cognitive efficiency of displays. A cognitively efficient display should be more informative while demanding fewer mental resources, so that an operator can process more displayed information using their limited working memory and achieve better performance. In order to quantitatively evaluate this display property, a Cognitive Efficiency (CE) metric is formulated as the ratio of the measures of two dimensions: display informativeness and required mental resources (each dimension can be affected by display, human, and contextual factors). The first segment of the dissertation discusses the available measurement techniques to construct the CE metric and initially validates it with basic discrete displays. The second segment demonstrates that displays with higher cognitive efficiency improve multitask performance. This part also identifies the version of the CE metric that is most predictive of multitask performance. The last segment of the dissertation applies the CE metric in driving scenarios to evaluate novel speedometer displays; however, it finds that the most efficient display may not better enhance concurrent tracking performance in driving. Although the findings of the dissertation have several limitations, they provide valuable insight into the complicated relationship among display, human cognition, and multitask performance in human-machine systems.
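The CE metric described in this abstract is a simple ratio of two measured quantities. A minimal sketch, with made-up scores standing in for the dissertation's actual measures of informativeness and mental resources:

```python
def cognitive_efficiency(informativeness, mental_resources):
    """CE as the ratio of display informativeness to the mental
    resources the display requires; both inputs are assumed to be
    positive scores on some common measurement scale."""
    if mental_resources <= 0:
        raise ValueError("mental_resources must be positive")
    return informativeness / mental_resources

# Two hypothetical displays conveying the same information,
# one demanding more mental resources than the other
ce_efficient = cognitive_efficiency(informativeness=4.2, mental_resources=2.0)
ce_inefficient = cognitive_efficiency(informativeness=4.2, mental_resources=3.0)
# ce_efficient > ce_inefficient: the first display is more cognitively efficient
```

How informativeness and required mental resources are actually measured is the substance of the dissertation; the ratio itself is the easy part.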

    Investigating vigilance for auditory, visual, and haptic interfaces in alarm monitoring

    Alarms in healthcare systems are delivered primarily through visual and auditory modalities. Alarms can occur thousands of times a day and can be stressful for clinicians. This overabundance of alarms leads to alarm fatigue, a major patient safety issue, as alarms may be silenced or not responded to in a timely manner. Introducing a new information modality, such as a touchless haptic interface, could mitigate the effects of the vigilance decrement and alarm fatigue, as suggested by multiple resource theory and the idea that we have limited cognitive resources. The objective of this work is to investigate the use of a touchless haptic interface in an alarm monitoring vigilance task, compared to visual and auditory interfaces. Data were collected on the reaction times of stimulus responses to understand cognitive load, and on the numbers of correct detections, false positives, and false negatives to understand performance. Participants (N=36) completed a vigilance task in one of three modality groups, in which they were asked to identify a stimulus over a 40-minute period. Mixed-effects linear regression models were built to analyze the differences between modalities and blocks. The main finding of this work is that visual interfaces perform best for alarm monitoring compared to auditory and haptic interfaces; however, it was also shown that haptic interfaces may impose a lower cognitive load than auditory interfaces. Therefore, haptic interfaces may be a promising avenue for offloading information in healthcare alarm monitoring applications.
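The performance counts mentioned above (correct detections, false positives, false negatives) are commonly summarised as hit and false-alarm rates in vigilance research. A generic sketch with hypothetical counts, not the study's actual analysis code:

```python
def vigilance_rates(hits, misses, false_alarms, correct_rejections):
    """Summarise detection performance in one block as
    (hit rate, false-alarm rate), the standard signal-detection summary."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate, false_alarm_rate

# Hypothetical counts from one participant over a 40-minute session
hit_rate, fa_rate = vigilance_rates(
    hits=18, misses=2, false_alarms=3, correct_rejections=77
)
# hit_rate = 18/20 = 0.9; fa_rate = 3/80 = 0.0375
```

A vigilance decrement would show up as the hit rate falling (and/or reaction times rising) from early blocks to late blocks, which is what the study's mixed-effects models compare across modalities.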

    An aesthetics of touch: investigating the language of design relating to form

    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but that at present they make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary for touch to match the detail of their explanations of other aspects, such as their intent or selection of materials. We believe that more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers have shown that even making-based learning has a strong verbal element to it. However, verbal language alone does not appear to be adequate for a comprehensive language of touch. Graduate designer-makers' descriptive practices combined non-verbal manipulation with verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities, but rather are situated competences that physically demonstrate the presence of haptic qualities. Such competences are more important than sets of verbal vocabularies in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account to comprehensively improve designers' capabilities.

    Investigating perceptual congruence between information and sensory parameters in auditory and vibrotactile displays

    A fundamental interaction between a computer and its user(s) is the transmission of information between the two, and there are many situations where it is necessary for this interaction to occur non-visually, such as using sound or vibration. To design successful interactions in these modalities, it is necessary to understand how users perceive mappings between information and acoustic or vibration parameters, so that these parameters can be designed such that they are perceived as congruent. This thesis investigates several data-sound and data-vibration mappings by using psychophysical scaling to understand how users perceive the mappings. It also investigates the impact that using these methods during design has when they are integrated into an auditory or vibrotactile display. To investigate acoustic parameters that may provide more perceptually congruent data-sound mappings, Experiments 1 and 2 explored several psychoacoustic parameters for use in a mapping. These studies found that applying amplitude modulation — or roughness — to a signal, or applying broadband noise to it, resulted in performance that was similar to conducting the task visually. Experiments 3 and 4 used scaling methods to map how a user perceived a change in an information parameter for a given change in an acoustic or vibrotactile parameter. Experiment 3 showed that increases in acoustic parameters that are generally considered undesirable in music were perceived as congruent with information parameters with negative valence, such as stress or danger. Experiment 4 found that data-vibration mappings were more generalised — a given increase in a vibrotactile parameter was almost always perceived as an increase in an information parameter — regardless of the valence of the information parameter. Experiments 5 and 6 investigated the impact that using results from the scaling methods used in Experiments 3 and 4 had on users' performance with an auditory or vibrotactile display.
These experiments also explored the impact that the complexity of the context in which the display was placed had on user performance. These studies found that using mappings based on scaling results did not significantly affect users' performance with a simple auditory display, but it did reduce response times in a more complex use case.
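Psychophysical scaling of the kind used in this thesis is often modelled with a Stevens-style power law, psi = k * phi**a, relating perceived magnitude psi to stimulus intensity phi. A generic log-log least-squares fit (not the thesis's actual procedure, with made-up data) looks like:

```python
import math

def fit_power_law(stimulus, perceived):
    """Fit psi = k * phi**a by ordinary least squares in log-log space,
    where the power law becomes the line log(psi) = log(k) + a*log(phi)."""
    logs = [(math.log(x), math.log(y)) for x, y in zip(stimulus, perceived)]
    n = len(logs)
    sx = sum(lx for lx, _ in logs)
    sy = sum(ly for _, ly in logs)
    sxx = sum(lx * lx for lx, _ in logs)
    sxy = sum(lx * ly for lx, ly in logs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # exponent (slope)
    k = math.exp((sy - a * sx) / n)                # scale factor (intercept)
    return k, a

# Synthetic ratings generated from a known power law (k=0.5, a=0.67),
# standing in for magnitude-estimation data
stimulus = [1.0, 2.0, 4.0, 8.0]
perceived = [0.5 * x ** 0.67 for x in stimulus]
k, a = fit_power_law(stimulus, perceived)
```

The fitted exponent a is what such studies compare across parameters: it captures how steeply a change in an acoustic or vibrotactile parameter maps onto a perceived change in the information parameter.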

    Multimodality in VR: A survey

    Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR and its role and benefits in user experience, together with different applications that leverage multimodality in many disciplines. These works encompass several fields of research and demonstrate that multimodality plays a fundamental role in VR, enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.

    Future bathroom: A study of user-centred design principles affecting usability, safety and satisfaction in bathrooms for people living with disabilities

    Research and development work relating to assistive technology 2010-11 (Department of Health). Presented to Parliament pursuant to Section 22 of the Chronically Sick and Disabled Persons Act 1970.