7 research outputs found

    An Empirical Study of Hear-Through Augmented Reality: Using Bone Conduction to Deliver Spatialized Audio

    Enabling audio-haptics

    This thesis deals with possible solutions to facilitate orientation, navigation and overview of non-visual interfaces and virtual environments with the help of sound in combination with force-feedback haptics. Applications with haptic force-feedback, s

    Tools for urban sound quality assessment

    An audio architecture integrating sound and live voice for virtual environments

    The purpose behind this thesis was to design and implement an audio system architecture, both in hardware and in software, for use in virtual environments. The hardware and software design requirements were to provide the ability to add sounds, environmental effects such as reverberation and occlusion, and live streaming voice to any virtual environment employing this architecture. Several free or open-source sound APIs were evaluated, and DirectSound3D was selected as the core component of the audio architecture. Creative Labs Environmental Audio Extensions (EAX) was integrated into the architecture to provide environmental effects such as reverberation, occlusion, obstruction, and exclusion. Voice over IP (VoIP) technology was evaluated to provide live, streaming voice to any virtual environment. DirectVoice was selected as the voice component of the architecture due to its integration with DirectSound3D. However, the extremely high latency of DirectVoice, and of other VoIP applications and software, required further research into alternative live-voice architectures for inclusion in virtual environments. Ausim3D's GoldServe Audio Localizing Audio Server System was evaluated and integrated into the hardware component of the audio architecture to provide an extremely low-latency, live, streaming voice capability.
    http://archive.org/details/anudiorchitectur109454977
    Commander, United States Naval Reserve
    Approved for public release; distribution is unlimited
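
    As a rough illustration of the layered design this abstract describes (a spatialized-sound core with environmental effects plus a swappable live-voice path chosen on latency grounds), the Python sketch below uses hypothetical names (AudioArchitecture, VoiceBackend, VoipVoice, HardwareVoiceServer) and made-up latency figures. It is not code from the thesis and does not call the DirectSound3D, EAX, or GoldServe APIs; it only sketches the backend-selection trade-off under those assumptions.

        from dataclasses import dataclass, field
        from typing import Protocol


        @dataclass
        class Sound:
            """A spatialized sound source with a 3D position and optional effects."""
            name: str
            position: tuple[float, float, float]
            reverb: bool = False
            occluded: bool = False


        class VoiceBackend(Protocol):
            """Any live-voice transport the architecture can plug in."""
            latency_ms: float

            def stream(self, frames: bytes) -> None: ...


        class VoipVoice:
            """Software VoIP path: simple to integrate but high latency (illustrative figure)."""
            latency_ms = 250.0

            def stream(self, frames: bytes) -> None:
                pass  # hand frames to the network stack


        class HardwareVoiceServer:
            """Dedicated localizing audio server: very low latency (illustrative figure)."""
            latency_ms = 5.0

            def stream(self, frames: bytes) -> None:
                pass  # hand frames to the external hardware


        @dataclass
        class AudioArchitecture:
            """Core spatializer plus environmental effects and a swappable voice path."""
            max_voice_latency_ms: float
            sounds: list[Sound] = field(default_factory=list)

            def add_sound(self, sound: Sound) -> None:
                self.sounds.append(sound)

            def choose_voice(self, candidates: list[VoiceBackend]) -> VoiceBackend:
                # Pick the first backend that meets the latency budget.
                for backend in candidates:
                    if backend.latency_ms <= self.max_voice_latency_ms:
                        return backend
                raise RuntimeError("no voice backend meets the latency budget")


        if __name__ == "__main__":
            arch = AudioArchitecture(max_voice_latency_ms=30.0)
            arch.add_sound(Sound("footsteps", (1.0, 0.0, -2.0), reverb=True))
            voice = arch.choose_voice([VoipVoice(), HardwareVoiceServer()])
            print(type(voice).__name__)  # HardwareVoiceServer

    The selection step mirrors the trade-off the abstract reports: the software VoIP path is easier to integrate but misses the latency budget, so the dedicated hardware server is chosen for live voice.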

    Large Deformation Diffeomorphic Metric Mapping Provides New Insights into the Link Between Human Ear Morphology and the Head-Related Transfer Functions

    The research presented in this thesis is composed of four sections. In the first section, it is shown how LDDMM can be applied to deforming head and ear shapes in the context of a morphoacoustic study. Further, tools are developed to measure differences in 3D shapes using the framework of currents and also to compare and measure the differences between the acoustic responses obtained from BEM simulations for two ear shapes. Finally, this section introduces the multi-scale approach for mapping ear shapes using LDDMM. The second section of the thesis estimates a template ear, head and torso shape from the shapes available in the SYMARE database. This part of the thesis explains a new procedure for developing the template ear shape. The template ear and head shapes were verified by comparing the features in the template shapes to corresponding features in the CIPIC and SYMARE database populations. The third section of the thesis examines the quality of the deformations from the template ear shape to target ears in SYMARE from both an acoustic and a morphological standpoint. As a result of this investigation, it was identified that ear shapes can be studied more accurately by the use of two physical scales, and that the scales at which the ear shapes were studied were dependent on the parameters chosen when mapping ears in the LDDMM framework. Finally, this section concludes by noting how shape distances vary with the acoustic distances using the developed tools. In the final part of the thesis, the variations in the morphology of ears are examined using Kernel Principal Component Analysis (KPCA) and the changes in the corresponding acoustics are studied using standard Principal Component Analysis (PCA). These examinations involved identifying the number of kernel principal components required to model ear shapes with an acceptable level of accuracy, both morphologically and acoustically.
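
    The component-counting step in the final section can be illustrated with a small, self-contained Python sketch. The arrays below are random stand-ins (the thesis works with SYMARE ear shapes and BEM-simulated acoustic responses), and the RBF kernel, its gamma value, and the 95% target are arbitrary assumptions; the sketch only shows how a kernel PCA eigen-spectrum and a standard PCA spectrum can each be truncated at a chosen accuracy level.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic stand-ins: rows are "ear shapes" (flattened surface features)
        # and "acoustic responses" (e.g. magnitude spectra). These only
        # illustrate the component-counting procedure, not the real data.
        shapes = rng.normal(size=(60, 300))      # 60 subjects x 300 shape features
        acoustics = rng.normal(size=(60, 128))   # 60 subjects x 128 frequency bins


        def kpca_eigenvalues(X, gamma=1e-3):
            """Kernel PCA with an RBF kernel: eigenvalues of the centred Gram matrix."""
            sq = np.sum(X**2, axis=1)
            d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
            K = np.exp(-gamma * d2)                          # RBF kernel matrix
            n = K.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n
            Kc = J @ K @ J                                   # double-centre in feature space
            eigvals = np.linalg.eigvalsh(Kc)[::-1]           # descending order
            return np.clip(eigvals, 0.0, None)


        def components_for(spectrum, target=0.95):
            """Smallest number of components whose cumulative share reaches `target`."""
            share = spectrum / spectrum.sum()
            return int(np.searchsorted(np.cumsum(share), target) + 1)


        # Morphology: kernel PCA eigen-spectrum of the shape data.
        k_eig = kpca_eigenvalues(shapes)
        print("kernel PCs for 95% of shape variance:", components_for(k_eig))

        # Acoustics: standard PCA via the singular values of the centred responses.
        A = acoustics - acoustics.mean(axis=0)
        s = np.linalg.svd(A, compute_uv=False)
        print("PCs for 95% of acoustic variance:", components_for(s**2))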

    Towards a general model for the design of virtual reality learning environments

    Virtual reality (VR) has been described as a new and unique type of learning media, primarily because it encourages active participation. However, a large number of VR worlds are barely more than passive 3D graphic visualisations. This might be due to the lack of guidelines for the design of interactive worlds, or to the learning preferences of the designers themselves. The literature indicates a number of principles, especially in the areas of VR design and learning theory, that could form the basis of appropriate design guidelines, and this thesis presents these as a set of guidelines for VR designers. There is a lack of information about the learning preferences of VR designers or the design of appropriate help systems for VR learning media, so four additional fieldwork studies were carried out to investigate the learning styles, communication styles, attitudes towards the use of VR in learning and training situations, and preferences for the design and use of VR help systems, using a sample of VR designers and VR design students. The results indicated that the learning style and communication profiles of VR designers may not be suitable for the design of active learning material. It was also found that VR designers had positive attitudes towards the development of VR in general, but less so for learning situations. VR designers tended to provide mainly text-based (visual) instruction in their designs, which may be linked to their predominantly visual learning modalities. However, the results suggested that visual-dominant VR design students were equally likely to prefer voiced (auditory) instructions when used naturally within a VR world. The findings from these four studies were incorporated into a broad set of top-level guidelines that form the first step towards a general model for the design of active, participatory VR learning environments.

    Beyond speech intelligibility and speech quality: measuring listening effort with an auditory flanker task

    If listening to speech against a background of noise increases listening effort, then the effectiveness of a speech technology designed to reduce background noise could be measured by the reduction in listening effort it provides. Reports of increased listening effort in environments with greater background noise have been linked to accompanying decreases in performance (e.g., slower responses and more errors), which are commonly attributed to the increased demands placed on limited cognitive resources in these challenging listening environments, particularly when performing more than one task. As these cognitive resources are also implicated in maintaining attention and reducing distraction, the work reported here proposes to measure listening effort through changes in distraction while listening to noisy and digitally noise-reduced speech, using an auditory flanker task designed to simulate an everyday situation: listening on the telephone. Over a series of experiments this novel listening effort measure is enhanced by the inclusion of a simultaneous memory task and contrasted with listening effort ratings and conventional speech technology evaluation measures (intelligibility and speech quality). However, while there are indications that increased background noise can increase listening effort and that digital noise reduction fails to reverse this effect, the results are not consistent. These equivocal results are discussed in light of the recent surge of interest in listening effort research.
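
    As an illustration of how distraction in an auditory flanker task might be summarised, the Python sketch below computes a flanker interference score (correct-trial response time on incongruent minus congruent trials) for each listening condition. The trial records, condition names, and numbers are hypothetical, not data from these experiments, and the real studies also include a simultaneous memory task and effort ratings alongside this kind of score.

        import statistics
        from collections import defaultdict

        # Hypothetical trial records from an auditory flanker task: each trial has a
        # listening condition, flanker congruency, response time (ms) and correctness.
        trials = [
            {"condition": "noisy",         "congruent": True,  "rt_ms": 612, "correct": True},
            {"condition": "noisy",         "congruent": False, "rt_ms": 748, "correct": False},
            {"condition": "noisy",         "congruent": False, "rt_ms": 731, "correct": True},
            {"condition": "noise_reduced", "congruent": True,  "rt_ms": 598, "correct": True},
            {"condition": "noise_reduced", "congruent": False, "rt_ms": 689, "correct": True},
            {"condition": "noise_reduced", "congruent": True,  "rt_ms": 605, "correct": True},
        ]


        def flanker_interference(trials):
            """Mean correct-trial RT on incongruent minus congruent trials, per condition.

            A larger difference indicates more distraction by the flankers, which the
            abstract treats as a possible marker of greater listening effort."""
            rts = defaultdict(lambda: defaultdict(list))
            for t in trials:
                if t["correct"]:  # RT analyses here keep correct trials only
                    rts[t["condition"]][t["congruent"]].append(t["rt_ms"])
            return {
                cond: statistics.mean(by_cong[False]) - statistics.mean(by_cong[True])
                for cond, by_cong in rts.items()
                if by_cong[False] and by_cong[True]
            }


        print(flanker_interference(trials))
        # {'noisy': 119.0, 'noise_reduced': 87.5}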