
    Enhancing user experience and safety in the context of automated driving through uncertainty communication

    Operators of highly automated driving systems may exhibit behaviour characteristic of overtrust due to an insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced and safe driving performance following emergency takeovers is impeded. Previous research has indicated that conveying system uncertainties may alleviate these issues. However, existing approaches require drivers to attend to the uncertainty information with focal attention, likely resulting in missed changes when they are engaged in non-driving-related tasks. This research project expands on existing work on uncertainty communication in the context of automated driving. Specifically, it aims to investigate the implications of conveying uncertainties while accounting for non-driving-related tasks and, based on the outcomes, to develop and evaluate an uncertainty display that enhances both user experience and driving safety. As a first step, the impact of visually conveying uncertainties was investigated with respect to workload, trust, monitoring behaviour, non-driving-related tasks, takeover performance, and situation awareness. For this, an anthropomorphic visual uncertainty display located in the instrument cluster was developed. While the hypothesised benefits for trust calibration and situation awareness were confirmed, the results indicate that visually conveying uncertainties leads to increased perceived effort due to a higher frequency of monitoring glances. Building on these findings, peripheral awareness displays were explored as a means of conveying uncertainties without the need for focused attention, thereby reducing monitoring glances. As a prerequisite for developing such a display, a systematic literature review was conducted to identify evaluation methods and criteria, which were then consolidated into a comprehensive framework.
Grounded in this framework, a peripheral awareness display for uncertainty communication was developed and subsequently compared with the initially proposed anthropomorphic visual uncertainty display in a driving simulator study. Eye tracking and subjective workload data indicate that the peripheral awareness display reduces monitoring effort relative to the visual display, while driving performance and trust data show that the benefits of uncertainty communication are maintained. Further, this research project addresses the implications of increasing the functional detail of uncertainty information. Results of a driving simulator study indicate that workload in particular should be considered when increasing the functional detail of uncertainty information. Expanding upon this approach, an augmented reality display concept was developed and a set of visual variables was explored in a forced-choice sorting task to assess their ordinal characteristics. Changes in colour hue and animation-based variables in particular received high preference ratings and were ordered consistently from low to high uncertainty. This research project has contributed a series of novel insights and ideas to the field of human factors in automated driving. It confirmed that conveying uncertainties improves trust calibration and situation awareness, but highlighted that using a visual display lessens the positive effects. Addressing this shortcoming, a peripheral awareness display was designed using a dedicated evaluation framework. Compared with the previously employed visual display, it decreased monitoring glances and, consequently, perceived effort. Further, an augmented reality-based uncertainty display concept was developed to minimise the workload increments associated with increases in the functional detail of uncertainty information.

    Tissue-conducted spatial sound fields

    We describe experiments using multiple cranial transducers to achieve auditory spatial perceptual impressions via bone conduction (BC) and tissue conduction (TC), bypassing the peripheral hearing apparatus. This could be useful in cases of peripheral hearing damage or where ear occlusion is undesirable. Previous work (e.g. Stanley and Walker, 2006; MacDonald and Letowski, 2006) indicated that robust lateralization is feasible via tissue conduction. We have utilized discrete signals, stereo, and first-order ambisonics to investigate control of externalization, range, direction in azimuth and elevation, movement, and spaciousness. Early results indicate robust and coherent effects. Current technological implementations are presented and potential development paths discussed.
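The first-order ambisonic encoding the abstract refers to can be sketched as a standard B-format panning equation. This is a generic textbook formulation (with the omnidirectional W channel scaled by 1/√2), not the authors' specific implementation:

```python
import math

def encode_foa(sample: float, azimuth_deg: float, elevation_deg: float):
    """Encode a mono sample into first-order ambisonic B-format (W, X, Y, Z).

    W is omnidirectional and conventionally scaled by 1/sqrt(2);
    X, Y, Z follow figure-of-eight patterns along the three axes.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample * (1.0 / math.sqrt(2.0))        # omnidirectional
    x = sample * math.cos(az) * math.cos(el)   # front-back
    y = sample * math.sin(az) * math.cos(el)   # left-right
    z = sample * math.sin(el)                  # up-down
    return w, x, y, z

# A source directly ahead (0 deg azimuth and elevation) excites only W and X.
w, x, y, z = encode_foa(1.0, 0.0, 0.0)
```

A decoder would then combine W, X, Y and Z with per-transducer weights to drive each cranial transducer in the array.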

    Head-Tracking Haptic Computer Interface for the Blind

    In today’s heavily technology-dependent society, blind and visually impaired people are becoming increasingly disadvantaged in terms of access to media, information, electronic commerce, communications and social networks. Not only are computers becoming more widely used in general, but their dependence on visual output is increasing, extending the technology further out of reach for those without sight. For example, blindness was less of an obstacle for programmers when command-line interfaces were more commonplace, but with the introduction of Graphical User Interfaces (GUIs) for both development and final applications, many blind programmers were made redundant (Alexander, 1998; Siegfried et al., 2004). Not only are images, video and animation heavily entrenched in today’s interfaces, but the visual layout of the interfaces themselves holds important information which is inaccessible to sightless users with existing accessibility technology.

    Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction

    This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction for infotainment systems in cars. Car crashes and near-crash events are most commonly caused by driver distraction. Mid-air interaction is a way of reducing driver distraction by reducing the visual demand of infotainment. Despite a range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures. The effects of different feedback modalities on eye gaze behaviour and on the driving and gesturing tasks are considered. We found that feedback modality influenced gesturing behaviour. However, drivers corrected falsely executed gestures more often in non-visual conditions. Our findings show that non-visual feedback can reduce visual distraction significantly.

    Principles and Guidelines for Advancement of Touchscreen-Based Non-visual Access to 2D Spatial Information

    Graphical materials such as graphs and maps are often inaccessible to millions of blind and visually-impaired (BVI) people, which negatively impacts their educational prospects, ability to travel, and vocational opportunities. To address this longstanding issue, a three-phase research program was conducted that builds on and extends previous work establishing touchscreen-based haptic cuing as a viable alternative for conveying digital graphics to BVI users. Although promising, this approach poses unique challenges that can only be addressed by schematizing the underlying graphical information based on perceptual and spatio-cognitive characteristics pertinent to touchscreen-based haptic access. Towards this end, this dissertation empirically identified a set of design parameters and guidelines through a logical progression of seven experiments. Phase I investigated perceptual characteristics related to touchscreen-based graphical access using vibrotactile stimuli, with results establishing three core perceptual guidelines: (1) a minimum line width of 1mm should be maintained for accurate line-detection (Exp-1), (2) a minimum interline gap of 4mm should be used for accurate discrimination of parallel vibrotactile lines (Exp-2), and (3) a minimum angular separation of 4mm should be used for accurate discrimination of oriented vibrotactile lines (Exp-3). 
Building on these parameters, Phase II studied the core spatio-cognitive characteristics pertinent to touchscreen-based non-visual learning of graphical information, with results leading to the specification of three design guidelines: (1) a minimum width of 4mm should be used for supporting tasks that require tracing of vibrotactile lines and judging their orientation (Exp-4), (2) a minimum width of 4mm should be maintained for accurate line tracing and learning of complex spatial path patterns (Exp-5), and (3) vibrotactile feedback should be used as a guiding cue to support the most accurate line tracing performance (Exp-6). Finally, Phase III demonstrated that schematizing line-based maps based on these design guidelines leads to the development of an accurate cognitive map. Results from Experiment-7 provide theoretical evidence that learning from vision and touch leads to the development of functionally equivalent amodal spatial representations in memory. Findings from all seven experiments contribute to new theories of haptic information processing that can guide the development of new touchscreen-based non-visual graphical access solutions.
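Applied in practice, the millimetre guidelines above must be converted to device pixels for a given touchscreen. A minimal sketch, assuming the standard 25.4 mm-per-inch conversion; the guideline names and renderer context are illustrative, not taken from the dissertation:

```python
# Perceptual design guidelines from the experiments above (values in mm).
GUIDELINES_MM = {
    "min_line_width_detection": 1.0,  # Exp-1: accurate line detection
    "min_interline_gap": 4.0,         # Exp-2: discriminating parallel lines
    "min_line_width_tracing": 4.0,    # Exp-4/5: tracing and path learning
}

def mm_to_px(mm: float, dpi: float) -> float:
    """Convert a physical length in millimetres to device pixels (1 in = 25.4 mm)."""
    return mm / 25.4 * dpi

# On a 254-dpi touchscreen, a traceable vibrotactile line needs 40 px or more.
width_px = mm_to_px(GUIDELINES_MM["min_line_width_tracing"], dpi=254.0)
```

Because the guidelines are physical sizes, a renderer must query the actual display density rather than hard-coding pixel widths.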

    Feel it in my bones: Composing multimodal experience through tissue conduction

    We outline here the feasibility of coherently utilising tissue conduction for spatial audio and tactile input. Compositional concerns specific to tissue conduction displays are discussed; it is hypothesised that the qualia available through this medium differ substantively from those of conventional artificial means of appealing to auditory spatial perception. The implications include that spatial music experienced in this manner constitutes a new kind of experience, and that the ground rules of composition are yet to be established. We refer to results from listening experiences with one hundred listeners in an unstructured attribute elicitation exercise, where prominent themes such as “strange”, “weird”, “positive”, “spatial” and “vibrations” emerged. We speculate on future directions aimed at taking maximal advantage of the principle of multimodal perception to broaden the informational bandwidth of the display system. Some implications for composition for the hearing-impaired are elucidated.

    How to Build an Embodiment Lab: Achieving Body Representation Illusions in Virtual Reality

    Advances in computer graphics algorithms and virtual reality (VR) systems, together with the reduction in cost of associated equipment, have led scientists to consider VR as a useful tool for conducting experimental studies in fields such as neuroscience and experimental psychology. In particular, virtual body ownership, where the feeling of ownership over a virtual body is elicited in the participant, has become a useful tool in the study of body representation, the area of cognitive neuroscience and psychology concerned with how the brain represents the body. Although VR has been shown to be a useful tool for exploring body ownership illusions, integrating the various technologies necessary for such a system can be daunting. In this paper we discuss the technical infrastructure necessary to achieve virtual embodiment. We describe a basic VR system and how it may be used for this purpose, and then extend this system with the introduction of real-time motion capture, a simple haptics system, and the integration of physiological and brain electrical activity recordings.

    Touch-Screen Technology for the Dynamic Display of 2D Spatial Information Without Vision: Promise and Progress

    Many developers wish to capitalize on touch-screen technology for developing aids for the blind, particularly by incorporating vibrotactile stimulation to convey patterns on their surfaces, which are otherwise featureless. Our belief is that they will need to take into account basic research on haptic perception in designing these graphics interfaces. We point out constraints and limitations in haptic processing that affect the use of these devices. We also suggest ways to use sound to augment basic information from touch, and we include evaluation data from users of a touch-screen device with vibrotactile and auditory feedback that we have been developing, called a vibro-audio interface.

    Measuring relative vibrotactile spatial acuity: effects of tactor type, anchor points and tactile anisotropy

    Vibrotactile displays can compensate for the loss of sensory function of people with permanent or temporary deficiencies in vision, hearing, or balance, and can augment the immersive experience in virtual environments for entertainment, or professional training. This wide range of potential applications highlights the need for research on the basic psychophysics of mechanisms underlying human vibrotactile perception. One key consideration when designing tactile displays is determining the minimal possible spacing between tactile motors (tactors), by empirically assessing the maximal throughput of the skin, or, in other words, vibrotactile spatial acuity. Notably, such estimates may vary by tactor type. We assessed vibrotactile spatial acuity in the lower thoracic region for three different tactor types, each mounted in a 4 × 4 array with center-to-center inter-tactor distances of 25 mm, 20 mm, and 10 mm. Seventeen participants performed a relative three-alternative forced-choice point localization task with successive tactor activation for both vertical and horizontal stimulus presentation. The results demonstrate that specific tactor characteristics (frequency, acceleration, contact area) significantly affect spatial acuity measurements, highlighting that the results of spatial acuity measurements may only apply to the specific tactors tested. Furthermore, our results reveal an anisotropy in vibrotactile perception, with higher spatial acuity for horizontal than for vertical stimulus presentation.
The findings allow a better understanding of vibrotactile spatial acuity and can be used to formulate guidelines for the design of tactile displays, such as inter-tactor spacing, choice of tactor type, and direction of stimulus presentation. The research leading to these results received funding from the European Union’s Horizon 2020 Research and Innovation Program under Grant agreement No 643636 “Sound of Vision”.
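The relative three-alternative forced-choice task described above can be scored as a simple proportion correct against the one-in-three guessing baseline. A minimal sketch with hypothetical trial data; the actual study's psychophysical analysis goes beyond this naive comparison:

```python
def proportion_correct(responses, targets):
    """Proportion of trials where the reported location matches the target."""
    assert len(responses) == len(targets)
    hits = sum(r == t for r, t in zip(responses, targets))
    return hits / len(targets)

def above_chance(p: float, n_alternatives: int = 3) -> bool:
    """Crude check against the 3AFC guessing baseline (not a significance test)."""
    return p > 1.0 / n_alternatives

# Hypothetical data: 12 relative-localization judgements per condition.
targets   = ["left", "right", "same", "left", "right", "same"] * 2
responses = ["left", "right", "same", "left", "same",  "same"] * 2
p = proportion_correct(responses, targets)  # 10 of 12 correct
```

Comparing such scores across inter-tactor spacings (25, 20, 10 mm) is what yields the acuity estimate for each tactor type.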

    Electrotactile vision substitution for 3D trajectory following

    Navigation for blind persons represents a challenge for researchers in vision substitution. In this field, one of the techniques used for navigation is guidance. In this study, we develop a new approach to 3D trajectory following in which the requested task is to track a light path using computer input devices (keyboard and mouse) or a rigid body handled in front of a stereoscopic camera. The light path is perceived either through direct vision or via an electro-stimulation device, the Tongue Display Unit, a 12x12 matrix of electrodes. We refined our method through a series of experiments examining the effect of the perception modality and that of the input device. Preliminary results indicated a close correlation between the stimulated and recorded trajectories.
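Rendering a trajectory point on a 12x12 electrode matrix such as the Tongue Display Unit requires mapping continuous coordinates to discrete electrode indices. A minimal sketch; the nearest-electrode scheme here is an illustrative assumption, not the authors' documented stimulation protocol:

```python
def point_to_electrode(x: float, y: float, rows: int = 12, cols: int = 12):
    """Map a normalized 2D point (x, y in [0, 1]) to the nearest electrode
    index on a rows x cols grid, clamping out-of-range inputs to the edges.
    """
    col = min(cols - 1, max(0, round(x * (cols - 1))))
    row = min(rows - 1, max(0, round(y * (rows - 1))))
    return row, col

# The centre of the normalized display falls near the middle of the grid.
centre = point_to_electrode(0.5, 0.5)
```

Stepping such a mapping along the projected light path would produce the moving electrotactile stimulus that the participant tracks.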