254 research outputs found
Effects of virtual acoustics on dynamic auditory distance perception
Sound propagation encompasses various acoustic phenomena, including reverberation. Current virtual acoustic methods, ranging from parametric filters to physically accurate solvers, can simulate reverberation with varying degrees of fidelity. We investigate the effects of reverberant sounds generated using different propagation algorithms on acoustic distance perception, i.e., how far away humans perceive a sound source to be. In particular, we evaluate two classes of methods for real-time sound propagation in dynamic scenes, based on parametric filters and on ray tracing. Our study shows that the more accurate method exhibits less distance compression than the approximate, filter-based method. This suggests that accurate reverberation in VR results in a better reproduction of acoustic distances. We also quantify the levels of distance compression introduced by different propagation methods in a virtual environment. Comment: 8 pages, 7 figures
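Distance compression of the kind measured here is commonly quantified by fitting a compressive power function, perceived ≈ k·d^a, to listener judgments; an exponent a below 1 indicates compression (far sources judged closer than they are). The study's exact analysis is not given in the abstract, so the following is only an illustrative sketch with hypothetical function names and synthetic data:

```python
import numpy as np

def fit_compression(actual, perceived):
    """Fit perceived = k * actual**a in log-log space.

    An exponent a < 1 indicates distance compression:
    distant sources are judged closer than they are.
    """
    log_d, log_p = np.log(actual), np.log(perceived)
    a, log_k = np.polyfit(log_d, log_p, 1)  # slope = exponent
    return np.exp(log_k), a

# Synthetic judgments generated with a compressive exponent of 0.7
d = np.array([1.0, 2.0, 4.0, 8.0, 16.0])      # actual distances (m)
judged = 1.1 * d ** 0.7                        # perceived distances (m)
k, a = fit_compression(d, judged)              # a ~ 0.7 => compression
```

Comparing the fitted exponents across propagation methods (parametric filter vs. ray tracing) would then quantify which method compresses distance more.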
Perceptually Driven Interactive Sound Propagation for Virtual Environments
Sound simulation and rendering can significantly augment a user's sense of presence in virtual environments. Many techniques for sound propagation have been proposed that predict the behavior of sound as it interacts with the environment and is received by the user. At a broad level, propagation algorithms can be classified into reverberation filters, geometric methods, and wave-based methods. In practice, heuristic methods based on reverberation filters are simple to implement and have a low computational overhead, while wave-based algorithms are limited to static scenes and involve extensive precomputation. However, relatively little work has been done on the psychoacoustic characterization of different propagation algorithms, or on evaluating the relationship between scientific accuracy and perceptual benefit.

In this dissertation, we present perceptual evaluations of sound propagation methods and their ability to model complex acoustic effects for virtual environments. Our results indicate that scientifically accurate methods for reverberation and diffraction do result in increased perceptual differentiation. Based on these evaluations, we present two novel hybrid sound propagation methods that combine the accuracy of wave-based methods with the speed of geometric methods for interactive sound propagation in dynamic scenes. Our first algorithm couples modal sound synthesis with geometric sound propagation, using wave-based sound radiation to perform mode-aware sound propagation. We introduce diffraction kernels of rigid objects, which encapsulate the sound diffraction behavior of individual objects in free space and are then used to simulate plausible diffraction effects with an interactive path-tracing algorithm. Finally, we present a novel perceptually driven metric that can be used to accelerate the computation of late reverberation, enabling plausible simulation of reverberation with a low runtime overhead. We highlight the benefits of our novel propagation algorithms in different scenarios. Doctor of Philosophy
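Of the three algorithm classes the abstract names, reverberation filters are the simplest to illustrate. Below is a minimal sketch of one building block, a feedback comb filter of the kind combined in Schroeder-style parametric reverberators; the function name and parameter values are illustrative, not taken from the dissertation:

```python
import numpy as np

def feedback_comb(x, delay, gain):
    """Feedback comb filter: y[n] = x[n] + gain * y[n - delay].

    Parametric reverberators (e.g. Schroeder's design) sum several
    such combs and cascade allpass filters; delay and gain control
    the decay time independently of any scene geometry, which is
    what makes this class cheap but geometry-unaware.
    """
    y = np.zeros(len(x))
    for n in range(len(x)):
        fb = gain * y[n - delay] if n >= delay else 0.0
        y[n] = x[n] + fb
    return y

impulse = np.zeros(64)
impulse[0] = 1.0
tail = feedback_comb(impulse, delay=10, gain=0.5)
# Echoes appear at multiples of the delay, decaying geometrically:
# 1.0 at n=0, 0.5 at n=10, 0.25 at n=20, ...
```

Geometric and wave-based methods, by contrast, derive the echo pattern from the scene itself, which is why they are more expensive and (for wave-based solvers) typically precomputed.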
Spatial Sound Rendering – A Survey
Simulating sound propagation and rendering audio can improve the sense of realism and immersion both in complex acoustic environments and in dynamic virtual scenes. In studies of sound auralization, the focus has traditionally been on room acoustics modeling, but most of the same methods also apply to the construction of virtual environments such as those developed for computer gaming, cognitive research, and simulated training scenarios. This paper reviews state-of-the-art techniques based on acoustic principles that apply not only to real rooms but also to 3D virtual environments. The paper also highlights the need to expand the field of immersive sound to web-based browsing environments because, despite the interest and many benefits, few developments seem to have taken place in that context. Moreover, the paper lists the most effective algorithms for modelling spatial sound propagation and reports their advantages and disadvantages. Finally, the paper emphasizes the evaluation of these proposed works.
Interactive Sound Propagation using Precomputation and Statistical Approximations
Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques -- Ambient Reverberance and Aural Proxies -- to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at different ends of a spectrum of techniques for modeling sound propagation effects in interactive applications. The first emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second emphasizes efficiency by taking only the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs. Doctor of Philosophy
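The appeal of a precomputed-transfer approach is that the expensive part, accounting for multiple inter-reflections between all scene patches, collapses to a geometric series that can be inverted once offline, leaving only matrix-vector products at run time. The sketch below illustrates that idea numerically; it is not the dissertation's actual Precomputed Acoustic Radiance Transfer implementation, and the patch count and random matrices are placeholders for quantities a real solver would derive from scene geometry and materials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline: T[i, j] couples acoustic energy from patch j to patch i
# for one "bounce" (random here; a solver would compute it from
# geometry and materials; scaled so the series below converges).
N = 8
T = 0.1 * rng.random((N, N))

# All orders of inter-reflection at once:
#   total = T + T^2 + T^3 + ... = (I - T)^{-1} - I
# precomputed a single time.
total = np.linalg.inv(np.eye(N) - T) - np.eye(N)

# Run time: project the moving source's emission onto the patches,
# apply the precomputed matrix, gather at the listener. Only cheap
# matrix-vector products remain, so sources and listeners can move.
source_energy = rng.random(N)          # energy emitted toward each patch
patch_response = total @ source_energy # energy leaving each patch
listener_gain = rng.random(N)          # patch-to-listener coupling
received = listener_gain @ patch_response
```

The series identity can be sanity-checked via total = T + T·total, which is the fixed-point form of "one more bounce applied to everything already in flight".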
Listen2Scene: Interactive material-aware binaural sound propagation for reconstructed 3D scenes
We present an end-to-end binaural audio rendering approach (Listen2Scene) for virtual reality (VR) and augmented reality (AR) applications. We propose a novel neural-network-based binaural sound propagation method to generate acoustic effects for 3D models of real environments. Any clean (dry) audio can be convolved with the generated acoustic effects to render audio corresponding to the real environment. We propose a graph neural network that uses both the material and the topology information of a 3D scene to generate a scene latent vector. Moreover, we use a conditional generative adversarial network (CGAN) to generate acoustic effects from the scene latent vector. Our network is able to handle holes and other artifacts in the reconstructed 3D mesh model. We present an efficient cost function for the generator network to incorporate spatial audio effects. Given the source and listener positions, our learning-based binaural sound propagation approach can generate an acoustic effect in 0.1 milliseconds on an NVIDIA GeForce RTX 2080 Ti GPU and can easily handle multiple sources. We have evaluated the accuracy of our approach against binaural acoustic effects generated using an interactive geometric sound propagation algorithm and against captured real acoustic effects. We also performed a perceptual evaluation and observed that audio rendered by our approach is more plausible than audio rendered using prior learning-based sound propagation algorithms. Comment: Project page: https://anton-jeran.github.io/Listen2Scene
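The rendering step the abstract describes, convolving dry audio with a generated two-channel acoustic effect (a binaural impulse response), can be sketched as follows; the impulse-response values here are hypothetical stand-ins for the network's output:

```python
import numpy as np

def render_binaural(dry, ir_left, ir_right):
    """Convolve dry (anechoic) audio with a two-channel impulse
    response, as done once a network has generated the acoustic
    effect for a given source/listener pair."""
    left = np.convolve(dry, ir_left)
    right = np.convolve(dry, ir_right)
    return np.stack([left, right])  # shape: (2, len(dry)+len(ir)-1)

# Hypothetical stand-in IRs: direct sound plus one later echo,
# with an interaural level difference between the two ears.
ir_l = np.array([1.0, 0.0, 0.3])
ir_r = np.array([0.8, 0.0, 0.4])
dry = np.array([1.0, -1.0])         # toy dry signal
out = render_binaural(dry, ir_l, ir_r)
```

Because the expensive part (generating the IR) is decoupled from convolution, the same generated effect can render any dry source, and multiple sources are handled by summing their convolved outputs.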