11,267 research outputs found

    Sound effect metaphors for near field distance sonification

    This article presents a concept of sound-source distance sonification for virtual auditory displays, in the context of creating an assistive device for the visually impaired. To respond to user needs, three sonification metaphors of distance based on sound effects were designed. These metaphors can be applied to any type of sound and can thereby satisfy users' aesthetic preferences. The paper describes the motivation for this new type of sonification based on sound effects and proposes guidelines for the creation of the three metaphors. It then presents a user evaluation of these metaphors by 16 subjects in a near-field sound localization experiment. The experiment included a simple binaural rendering condition in order to compare and quantify the contribution of each metaphor to distance perception.
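The abstract does not specify the three metaphors' designs; as an illustration of the general kind of effect-based distance cue described, here is a minimal sketch (the gain and cutoff mappings and all parameter values are assumptions, not the article's design) that attenuates and low-passes a signal as a function of source distance:

```python
import numpy as np

def distance_effect(signal, distance, fs=44100, ref=1.0):
    """Apply a simple distance metaphor: inverse-distance gain plus a
    one-pole low-pass whose cutoff falls with distance (a rough
    air-absorption cue). All mappings are illustrative assumptions."""
    gain = ref / max(distance, ref)        # 1/r attenuation beyond ref
    cutoff = 16000.0 / (1.0 + distance)    # farther -> duller (assumed)
    a = np.exp(-2.0 * np.pi * cutoff / fs) # one-pole low-pass coefficient
    out = np.empty_like(signal)
    y = 0.0
    for i, x in enumerate(signal * gain):
        y = (1.0 - a) * x + a * y          # y[n] = (1-a) x[n] + a y[n-1]
        out[i] = y
    return out
```

Farther sources come out both quieter and duller; a real metaphor would be tuned perceptually, which is what the article's user evaluation addresses.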

    Interaction between auditory and visual perceptions on distance estimations in a virtual environment

    Navigation in virtual environments relies on accurate spatial rendering. A virtual object is localized according to its position in the environment, usually defined by three coordinates: azimuth, elevation, and distance. Although several studies have investigated the perception of auditory and visual cues in azimuth and elevation, little work has been done on the distance dimension. This study investigates how humans estimate the visual and auditory egocentric distances of virtual objects. Subjects were asked to estimate the egocentric distance of objects 2–20 m away in three contexts: auditory perception alone, visual perception alone, and a combination of both (with coherent and incoherent visual and auditory cues). Although egocentric distance was underestimated in all contexts, the results showed a stronger influence of visual information than of auditory information on perceived distance. Specifically, the bimodal incoherent condition gave perceived distances equivalent to those in the visual-only condition only when the visual target was closer to the subject than the auditory target.

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without devices that monitor users' physiological condition. User satisfaction is key to any product's acceptance, and computer applications and video games provide a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that it is possible to estimate user emotion with a software-only method.
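FLAME's full appraisal model is considerably more elaborate, but the fuzzy-rule core of such an estimator can be illustrated with a toy Mamdani-style sketch (the rule set, membership functions, and the `progress`/`frustration` inputs are illustrative assumptions, not the paper's implementation):

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def satisfaction(progress, frustration):
    """Two Mamdani-style rules, defuzzified by a weighted average of
    the rule outputs (0 = unhappy, 1 = satisfied). Inputs in [0, 1]."""
    rules = [
        # IF progress is high AND frustration is low THEN satisfied
        (min(tri(progress, 0.4, 1.0, 1.6), tri(frustration, -0.6, 0.0, 0.6)), 1.0),
        # IF progress is low AND frustration is high THEN unhappy
        (min(tri(progress, -0.6, 0.0, 0.6), tri(frustration, 0.4, 1.0, 1.6)), 0.0),
    ]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5   # no rule fires -> neutral
```

A real system would feed in game events (damage, kills, goal progress) and update the estimate over time, as the paper's play tests do.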

    A loudspeaker-based room auralization system for auditory research


    Perceptually Driven Interactive Sound Propagation for Virtual Environments

    Sound simulation and rendering can significantly augment a user's sense of presence in virtual environments. Many techniques for sound propagation have been proposed that predict the behavior of sound as it interacts with the environment and is received by the user. At a broad level, propagation algorithms can be classified into reverberation filters, geometric methods, and wave-based methods. In practice, heuristic methods based on reverberation filters are simple to implement and have a low computational overhead, while wave-based algorithms are limited to static scenes and involve extensive precomputation. However, relatively little work has been done on the psychoacoustic characterization of different propagation algorithms, or on evaluating the relationship between scientific accuracy and perceptual benefit. In this dissertation, we present perceptual evaluations of sound propagation methods and their ability to model complex acoustic effects for virtual environments. Our results indicate that scientifically accurate methods for reverberation and diffraction do result in increased perceptual differentiation. Based on these evaluations, we present two novel hybrid sound propagation methods that combine the accuracy of wave-based methods with the speed of geometric methods for interactive sound propagation in dynamic scenes. Our first algorithm couples modal sound synthesis with geometric sound propagation, using wave-based sound radiation to perform mode-aware sound propagation. We introduce diffraction kernels of rigid objects, which encapsulate the sound diffraction behavior of individual objects in free space and are then used to simulate plausible diffraction effects with an interactive path tracing algorithm. Finally, we present a novel perceptually driven metric that can be used to accelerate the computation of late reverberation, enabling plausible simulation of reverberation with low runtime overhead. We highlight the benefits of our novel propagation algorithms in different scenarios.
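As an example of the "simple to implement, low overhead" reverberation-filter family the abstract mentions, a classic Schroeder reverberator (parallel comb filters followed by series allpass filters) can be sketched as follows; the delay lengths and gains are conventional textbook values, not the dissertation's method:

```python
import numpy as np

def comb(x, delay, g):
    """Feedback comb filter: y[n] = x[n] + g * y[n - delay]."""
    y = np.array(x, dtype=float)
    for n in range(delay, len(y)):
        y[n] += g * y[n - delay]
    return y

def allpass(x, delay, g):
    """Schroeder allpass: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def schroeder_reverb(x):
    """Four parallel combs (mutually prime delays) averaged, then two
    allpasses to increase echo density without coloring the tail."""
    combs = [(1557, 0.84), (1617, 0.83), (1491, 0.85), (1422, 0.86)]
    wet = sum(comb(x, d, g) for d, g in combs) / len(combs)
    for d, g in [(225, 0.7), (556, 0.7)]:
        wet = allpass(wet, d, g)
    return wet
```

Fed an impulse, this produces an exponentially decaying tail; its low cost is exactly why such filters remain the default against which the geometric and wave-based methods above are compared.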

    3D Time-Based Aural Data Representation Using D4 Library’s Layer Based Amplitude Panning Algorithm

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016). The following paper introduces a new Layer Based Amplitude Panning algorithm and the supporting D4 library of rapid prototyping tools for 3D time-based data representation using sound. The algorithm is designed to scale and to support a broad array of configurations, with a particular focus on High Density Loudspeaker Arrays (HDLAs). The supporting rapid prototyping tools leverage oculocentric strategies for importing, editing, and rendering data, offering an array of innovative approaches to spatial data editing and representation through sound in HDLA scenarios. The resulting D4 ecosystem aims to address the shortcomings of existing approaches to spatial aural representation of data, and offers unique opportunities for furthering research in spatial data audification and sonification, as well as in transportable and scalable spatial media creation and production.
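The D4 library's actual layer-based algorithm is not given in the abstract; the pairwise building block it generalizes, panning within a single loudspeaker layer, can be illustrated with standard constant-power amplitude panning (the speaker azimuths and the sin/cos pan law here are generic assumptions, not the D4 implementation):

```python
import math

def pan_gains(azimuth, speakers):
    """Constant-power pairwise amplitude panning within one loudspeaker
    layer. `speakers` is a sorted list of azimuths (degrees); returns
    one gain per speaker, nonzero only for the enclosing pair."""
    gains = [0.0] * len(speakers)
    for i in range(len(speakers) - 1):
        lo, hi = speakers[i], speakers[i + 1]
        if lo <= azimuth <= hi:
            t = (azimuth - lo) / (hi - lo)            # position in the pair
            gains[i] = math.cos(t * math.pi / 2)      # sin/cos law keeps
            gains[i + 1] = math.sin(t * math.pi / 2)  # total power constant
            return gains
    raise ValueError("azimuth outside the loudspeaker span")
```

A layer-based 3D panner would apply this within each horizontal ring and then distribute energy across rings by elevation; squaring and summing the gains always yields 1, so perceived loudness stays constant as a source moves.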

    A system for room acoustic simulation for one's own voice

    The real-time simulation of room acoustical environments for one's own voice, using generic software, has until very recently been difficult due to the computational load involved: it requires real-time convolution of a person's voice with a potentially large number of long room impulse responses. This thesis presents a room acoustical simulation system with a software-based solution for performing real-time convolution with headtracking, in order to simulate the effect of room acoustical environments on the sound of one's own voice using binaural technology. To gather data for implementing headtracking in the system, human head movements were characterized while reading a text aloud. The rooms simulated with the system are actual rooms, characterized by measuring the room impulse response from the mouth to the ears of the same head (oral-binaural room impulse response, OBRIR). By repeating this process at 2° increments of the yaw angle on the horizontal plane, each room is binaurally scanned around a given position to obtain a collection of OBRIRs, which is then used by the software-based convolution system. In the rooms simulated with the system, a person equipped with a near-mouth microphone and near-ear loudspeakers can speak or sing and hear their voice as it would sound in the measured rooms, while physically being in an anechoic room. By continually updating the person's head orientation using headtracking, the corresponding OBRIR is chosen for convolution with their voice. The system described in this thesis achieves the low latency required to simulate nearby reflections, and it can perform convolution with long room impulse responses. The perceptual validity of the system is studied in two experiments involving human participants reading aloud a set text.
    The system presented in this thesis can be used to design experiments that study various aspects of the auditory perception of the sound of one's own voice in room environments. The system can also be adapted to incorporate a module that enables listening to the sound of one's own voice in commercial applications such as architectural acoustic room simulation software, teleconferencing systems, virtual reality, and gaming applications.
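The engine at the heart of such a low-latency convolver is typically block-wise FFT convolution with overlap-add: each incoming audio block is convolved with the impulse response, and the tail is carried into the following blocks. A minimal sketch of that general technique (not the thesis's implementation) is:

```python
import numpy as np

def overlap_add_convolve(blocks, ir, block_len):
    """Block-wise FFT convolution (overlap-add). Each block of input is
    convolved with `ir`; the portion of the result that extends past
    the block is summed into subsequent blocks, as a real-time
    convolver would do, so output latency is one block."""
    n = len(ir) + block_len - 1
    nfft = 1 << (n - 1).bit_length()        # next power of two >= n
    IR = np.fft.rfft(ir, nfft)
    tail = np.zeros(nfft - block_len)       # carried overlap
    out = []
    for b in blocks:
        y = np.fft.irfft(np.fft.rfft(b, nfft) * IR, nfft)
        y[:len(tail)] += tail               # add tail from earlier blocks
        out.append(y[:block_len].copy())
        tail = y[block_len:]                # carry remainder forward
    return np.concatenate(out)
```

A headtracked system like the one described would additionally crossfade between the OBRIRs of adjacent yaw angles as the head turns; long responses are usually partitioned further so early reflections keep the short-block latency.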