
    Psychophysical Evaluation of Three-Dimensional Auditory Displays

    This report describes the progress made during the first year of a three-year Cooperative Research Agreement (CRA NCC2-542). The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources and 'echoic' sources. The results of our research on two of these topics, the role of head movements and the role of echoes and reflections, were reported in the most recent Semi-Annual Progress Report (Appendix A). In the period since the last Progress Report we have been studying a third topic, the localizability of moving sources. The results of this research are described. The fidelity of a virtual auditory display is critically dependent on precise measurement of the listener's Head-Related Transfer Functions (HRTFs), which are used to produce the virtual auditory images. We continue to explore methods for improving our HRTF measurement technique. During this reporting period we compared HRTFs measured using our standard open-canal probe tube technique and HRTFs measured with the closed-canal insert microphones from the Crystal River Engineering Snapshot system.
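As an aside, one simple way to compare two sets of measured HRTFs, such as those obtained with the probe-tube and insert-microphone techniques mentioned above, is a mean magnitude-response difference in dB. The metric and parameters below are illustrative assumptions, not the report's actual analysis:

```python
import numpy as np

def hrtf_magnitude_difference_db(hrir_a, hrir_b, n_fft=512):
    """Mean absolute difference, in dB, between the magnitude responses of
    two head-related impulse responses measured for the same direction,
    e.g. one via an open-canal probe tube and one via an insert microphone.
    """
    eps = 1e-12  # avoid taking the log of zero
    mag_a = 20 * np.log10(np.abs(np.fft.rfft(hrir_a, n_fft)) + eps)
    mag_b = 20 * np.log10(np.abs(np.fft.rfft(hrir_b, n_fft)) + eps)
    return float(np.mean(np.abs(mag_a - mag_b)))

# Sanity check: a response compared with a half-amplitude copy of itself
# differs by exactly 20*log10(2) ~= 6.02 dB at every frequency.
rng = np.random.default_rng(1)
h = rng.standard_normal(256)
print(round(hrtf_magnitude_difference_db(h, 0.5 * h), 2))  # 6.02
```

A single averaged number like this hides direction- and frequency-dependent structure; a per-band or per-direction breakdown would be a natural refinement.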

    Current Use and Future Perspectives of Spatial Audio Technologies in Electronic Travel Aids

    Electronic travel aids (ETAs) have been in focus since technology allowed designing relatively small, light, and mobile devices for assisting the visually impaired. Since visually impaired persons rely on spatial audio cues as their primary sense of orientation, providing an accurate virtual auditory representation of the environment is essential. This paper gives an overview of the current state of spatial audio technologies that can be incorporated in ETAs, with a focus on user requirements. Most currently available ETAs either fail to address user requirements or underestimate the potential of spatial sound itself, which may explain, among other reasons, why no single ETA has gained widespread acceptance in the blind community. We believe there is ample scope for applying the technologies presented in this paper, with the aim of progressively bridging the gap between accessibility and accuracy of spatial audio in ETAs. This project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement no. 643636. Peer reviewed.

    Auditory Displays and Assistive Technologies: the use of head movements by visually impaired individuals and their implementation in binaural interfaces

    Visually impaired people rely upon audition for a variety of purposes, among which is the use of sound to identify the position of objects in the surrounding environment. This is not limited to localising sound-emitting objects: obstacles and environmental boundaries can also be perceived, thanks to the ability to extract information from reverberation and sound reflections. All of this can contribute to effective and safe navigation, as well as serving a function in certain assistive technologies thanks to the advent of binaural auditory virtual reality. It is known that head movements in the presence of sound elicit changes in the acoustical signals arriving at each ear, and these changes can mitigate common auditory localisation problems in headphone-based auditory virtual reality, such as front-to-back reversals. The goal of the work presented here is to investigate whether the visually impaired naturally engage head movement to facilitate auditory perception, and to what extent this may be applicable to the design of virtual auditory assistive technology. Three novel experiments are presented: a field study of head movement behaviour during navigation; a questionnaire assessing the self-reported use of head movement in auditory perception by visually impaired individuals (each comparing visually impaired and sighted participants); and an acoustical analysis of interaural differences and cross-correlations as a function of head angle and sound source distance. It is found that visually impaired people self-report using head movement for auditory distance perception. This is supported by the head movements observed during the field study, whilst the acoustical analysis showed that interaural correlations for sound sources within 5 m of the listener were reduced as head angle or source distance increased, and that interaural differences and correlations in reflected sound were generally lower than those of direct sound. Subsequently, relevant guidelines for designers of assistive auditory virtual reality are proposed.
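The interaural cross-correlation analysis described above can be sketched generically as follows. The lag window and sample rate are assumed values, not the thesis's exact parameters:

```python
import numpy as np

def interaural_cross_correlation(left, right, fs, max_lag_ms=1.0):
    """Normalised interaural cross-correlation (IACC) between two ear signals.

    Returns the peak of the normalised cross-correlation within a
    physiologically plausible lag range (about +/-1 ms for a human head).
    """
    max_lag = int(fs * max_lag_ms / 1000.0)
    left = left - np.mean(left)
    right = right - np.mean(right)
    denom = np.sqrt(np.sum(left**2) * np.sum(right**2))
    if denom == 0:
        return 0.0
    corr = np.correlate(left, right, mode="full")
    centre = len(corr) // 2  # index of zero lag
    window = corr[centre - max_lag:centre + max_lag + 1]
    return float(np.max(window) / denom)

# Identical signals at both ears give an IACC of 1.0.
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t)
print(round(interaural_cross_correlation(tone, tone, fs), 3))  # 1.0
```

Lower IACC values for nearby sources at larger head angles, as reported above, would show up directly as a reduced peak in this windowed correlation.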

    Electrotactile Communication via Matrix Electrode Placed on the Torso Using Fast Calibration, and Static vs. Dynamic Encoding

    Electrotactile stimulation is a technology that reproducibly elicits tactile sensations and can be used as an alternative channel for communicating information to the user. The presented work is part of an effort to develop this technology into an unobtrusive communication tool for first responders. In this study, the aim was to compare the success rate (SR) of discriminating stimulation at six spatial locations (static encoding) with that of recognizing six spatio-temporal patterns in which pads are activated sequentially in a predetermined order (dynamic encoding). Additionally, a procedure for fast amplitude calibration, comprising a semi-automated initialization and an optional manual adjustment, was employed and evaluated. Twenty subjects, including twelve first responders, participated in the study. The electrode, comprising a 3 × 2 matrix of pads, was placed on the lateral torso. The results showed that high SRs could be achieved for both types of message encoding after a short learning phase; however, the dynamic approach led to a statistically significant improvement in message recognition (SR of 93.3%) compared to static stimulation (SR of 83.3%). The proposed calibration procedure was also effective, since in 83.8% of cases the subjects did not need to adjust the stimulation amplitude manually.
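To make the two encodings concrete, here is a minimal sketch of how six messages might map onto a 3 × 2 pad matrix. The specific dynamic pad sequences are hypothetical, since the abstract does not specify the study's actual patterns:

```python
from itertools import permutations

ROWS, COLS = 3, 2                # 3 x 2 electrode matrix on the lateral torso
PADS = list(range(ROWS * COLS))  # pad indices 0..5

def static_encoding(message_id):
    """Static encoding: each of the six messages maps to a single pad,
    stimulated on its own."""
    return [PADS[message_id]]

def dynamic_encoding(message_id, seq_len=3):
    """Dynamic encoding: each message maps to a spatio-temporal pattern,
    i.e. a short sequence of pads activated one after another.

    The mapping used here (the message_id-th pad permutation in
    lexicographic order) is purely illustrative."""
    all_patterns = list(permutations(PADS, seq_len))
    return list(all_patterns[message_id])

print(static_encoding(4))   # [4]
print(dynamic_encoding(4))  # [0, 2, 1]
```

The intuition behind the reported SR advantage is that a sequence over several pads carries redundant spatial and temporal cues, whereas a static message must be identified from a single pad location alone.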

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    Enabling the effective application of spatial auditory displays in modern flight decks


    Human sound localisation cues and their relation to morphology

    Binaural soundfield reproduction has the potential to create realistic three-dimensional sound scenes using only a pair of normal headphones. Possible applications for binaural audio abound in, for example, the music, mobile communications and games industries. A problem exists, however, in that the head-related transfer functions (HRTFs) which inform our spatial perception of sound are affected by variations in human morphology, particularly in the shape of the external ear. It has been observed that HRTFs simply based on some kind of average head shape generally result in poor elevation perception, weak externalisation and spectrally distorted sound images. Hence, HRTFs are needed which accommodate these individual differences. Direct acoustic measurement and acoustic simulations based on morphological measurements are obvious means of obtaining individualised HRTFs, but both methods suffer from high cost and practical difficulties. The lack of a viable measurement method is currently hindering the widespread adoption of binaural technologies. There have been many attempts to estimate individualised HRTFs effectively and cheaply using easily obtainable morphological descriptors, but due to an inadequate understanding of the complex acoustic effects created in particular by the external ear, success has been limited. The work presented in this thesis strengthens current understanding in several ways and provides a promising route towards improved HRTF estimation. The way HRTFs vary as a function of direction is compared with localisation acuity to help pinpoint spectral features which contribute to spatial perception. Fifty subjects have been scanned using magnetic resonance imaging to capture their head and pinna morphologies, and HRTFs for the same group have been measured acoustically. To make analysis of this extensive data tractable, and so reveal the mapping between the morphological and acoustic domains, a parametric method for efficiently describing head morphology has been developed. Finally, a novel technique, referred to as morphoacoustic perturbation analysis (MPA), is described. We demonstrate how MPA allows the morphological origin of a variety of HRTF spectral features to be identified.
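For context, binaural reproduction with measured HRTFs amounts to convolving a mono signal with the left- and right-ear head-related impulse responses (HRIRs) for the desired direction. Below is a minimal sketch using toy HRIRs that encode only a delay and an attenuation; real HRIRs come from acoustic measurement or simulation:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono signal at a virtual position by convolving it with the
    head-related impulse responses (HRIRs) for that direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy HRIRs: the right ear receives a quieter, slightly delayed copy, as it
# would for a source on the listener's left side.
fs = 44100
itd_samples = 30  # ~0.68 ms interaural time difference at 44.1 kHz
hrir_l = np.zeros(64); hrir_l[0] = 1.0
hrir_r = np.zeros(64); hrir_r[itd_samples] = 0.5
sig = np.random.default_rng(0).standard_normal(fs // 10)
out = binaural_render(sig, hrir_l, hrir_r)
print(out.shape)  # (2, 4473)
```

The individualisation problem discussed above enters through `hrir_left` and `hrir_right`: the convolution machinery is trivial, but the perceptual quality depends almost entirely on how well those responses match the listener's own ears.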

    Learning to see and hear in 3D: Virtual reality as a platform for multisensory perceptual learning

    Virtual reality (VR) is an emerging technology which allows for the presentation of immersive and realistic yet tightly controlled audiovisual scenes. In comparison to conventional displays, a VR system can include depth, 3D audio, and fully integrated eye, head, and hand tracking, all over a much larger field of view than a desktop monitor provides. These properties hold great potential for vision science experiments, especially those that can benefit from more naturalistic stimuli, particularly in the case of visual rehabilitation. Prior work using conventional displays has demonstrated that visual loss due to stroke can be partially rehabilitated through laboratory-based tasks designed to promote long-lasting changes to visual sensitivity. In this work, I explore how VR can provide a platform for new, more complex training paradigms which leverage multisensory stimuli. In this dissertation, I will (I) provide context to motivate the use of multisensory perceptual training in the context of visual rehabilitation, (II) demonstrate best practices for the appropriate use of VR in a controlled psychophysics setting, (III) describe a prototype integrated hardware system for improved eye tracking in VR, and (IV, V) discuss results from two audiovisual perceptual training studies, one using multisensory stimuli and the other using cross-modal audiovisual stimuli. This dissertation provides the foundation for future work on rehabilitating visual deficits, both by improving the hardware and software systems used to present the training paradigms and by validating new multisensory training techniques not previously accessible with conventional desktop displays.