
    Efficient Techniques for Wave-based Sound Propagation in Interactive Applications

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from the point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called wave-based techniques, are computationally expensive and memory-intensive. These techniques therefore face many challenges to their applicability in interactive applications, including sound propagation in large environments, time-varying source and listener directivity, and the high simulation cost at mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that address these three challenges and enable the use of wave-based sound propagation in interactive applications.

    First, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and requires orders of magnitude less runtime memory than prior wave-based techniques. Second, we present an efficient framework for handling time-varying source and listener directivity in interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating, or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. We demonstrate that by carefully mapping all components of the wave simulator to the parallel processing capabilities of graphics processors, significant performance improvements can be achieved over CPU-based simulators while maintaining numerical accuracy.

    We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on the user's immersion in a virtual environment. We have integrated these techniques with the Half-Life 2 game engine, the Oculus Rift head-mounted display, and an Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.
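    For reference, the two standard formulations named in this abstract can be written out as follows (a sketch in standard notation, not reproduced from the dissertation): the acoustic wave equation for the pressure field p with speed of sound c and source term f,

        \[
        \frac{\partial^2 p(\mathbf{x},t)}{\partial t^2} - c^2 \nabla^2 p(\mathbf{x},t) = f(\mathbf{x},t),
        \]

    and a source directivity function expressed as a truncated linear combination of spherical harmonics \( Y_{lm} \) with time-varying coefficients \( c_{lm}(t) \),

        \[
        D(\theta,\phi,t) \approx \sum_{l=0}^{L} \sum_{m=-l}^{l} c_{lm}(t)\, Y_{lm}(\theta,\phi).
        \]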

    Perceptually Driven Interactive Sound Propagation for Virtual Environments

    Sound simulation and rendering can significantly augment a user's sense of presence in virtual environments. Many techniques for sound propagation have been proposed that predict the behavior of sound as it interacts with the environment and is received by the user. At a broad level, propagation algorithms can be classified into reverberation filters, geometric methods, and wave-based methods. In practice, heuristic methods based on reverberation filters are simple to implement and have a low computational overhead, while wave-based algorithms are limited to static scenes and involve extensive precomputation. However, relatively little work has been done on the psychoacoustic characterization of different propagation algorithms or on evaluating the relationship between scientific accuracy and perceptual benefits.

    In this dissertation, we present perceptual evaluations of sound propagation methods and their ability to model complex acoustic effects for virtual environments. Our results indicate that scientifically accurate methods for reverberation and diffraction do result in increased perceptual differentiation. Based on these evaluations, we present two novel hybrid sound propagation methods that combine the accuracy of wave-based methods with the speed of geometric methods for interactive sound propagation in dynamic scenes. Our first algorithm couples modal sound synthesis with geometric sound propagation, using wave-based sound radiation to perform mode-aware sound propagation. We introduce diffraction kernels of rigid objects, which encapsulate the sound diffraction behavior of individual objects in free space and are then used to simulate plausible diffraction effects with an interactive path tracing algorithm. Finally, we present a novel perceptually driven metric that can be used to accelerate the computation of late reverberation, enabling plausible simulation of reverberation with a low runtime overhead. We highlight the benefits of our novel propagation algorithms in different scenarios.
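    As a rough illustration of the modal sound synthesis that the first hybrid algorithm couples to geometric propagation, the classic modal model represents a vibrating object's sound as a sum of exponentially damped sinusoids. Below is a minimal Python sketch; the mode frequencies, dampings, and amplitudes are placeholder values, not data from the dissertation.

        import numpy as np

        def modal_synthesis(freqs, dampings, amps, fs=44100, dur=1.0):
            """Sum of exponentially damped sinusoids (the modal model)."""
            t = np.arange(int(fs * dur)) / fs
            return sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
                       for f, d, a in zip(freqs, dampings, amps))

        # Placeholder modes for a small struck object (illustrative only).
        signal = modal_synthesis(freqs=[440.0, 1210.0, 2350.0],
                                 dampings=[6.0, 9.0, 14.0],
                                 amps=[1.0, 0.5, 0.25])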

    Space station interior noise analysis program

    Documentation is provided for a microcomputer program developed to evaluate the effect of the vibroacoustic environment on speech communication inside a space station. The program, entitled Space Station Interior Noise Analysis Program (SSINAP), combines a Statistical Energy Analysis (SEA) prediction of sound and vibration levels within the space station with a speech intelligibility model based on the Modulation Transfer Function and the Speech Transmission Index (MTF/STI). The SEA model provides an effective analysis tool for predicting the acoustic environment based on a proposed space station design. The MTF/STI model provides a method for evaluating speech communication in the relatively reverberant and potentially noisy environments that are likely to occur in space stations. The combination of these two models provides a powerful analysis tool for optimizing the acoustic design of space stations from the point of view of speech communication. The mathematical algorithms used in SSINAP to implement the SEA and MTF/STI models are presented. An appendix explains the operation of the program along with details of the program structure and code.
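    For context, the MTF-to-STI conversion at the heart of such a speech intelligibility model follows a well-known procedure: each modulation transfer value is mapped to an apparent signal-to-noise ratio, clipped to +/- 15 dB, averaged, and normalized to [0, 1]. The Python sketch below shows this simplified form; the modulation matrix is a placeholder, and the full standard method adds octave-band weightings and auditory corrections omitted here.

        import numpy as np

        def sti_from_mtf(m):
            """Simplified STI from modulation transfer values m[band, mod_freq]."""
            snr_app = 10.0 * np.log10(m / (1.0 - m))      # apparent SNR in dB
            snr_app = np.clip(snr_app, -15.0, 15.0)       # limit to +/- 15 dB
            return float((snr_app.mean() + 15.0) / 30.0)  # normalize to [0, 1]

        # 7 octave bands x 14 modulation frequencies, uniform placeholder values.
        m = np.full((7, 14), 0.6)
        print(sti_from_mtf(m))  # about 0.56 for this uniform example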

    Virtual acoustic rendering by state wave synthesis

    In the context of the class of virtual acoustic simulation techniques that rely on traveling-wave rendering as dictated by path-tracing methods (e.g., image-source, ray-tracing, beam-tracing), we introduce State Wave Synthesis (SWS), a novel framework for the efficient rendering of sound traveling waves exchanged between multiple directional sound sources and multiple directional sound receivers in time-varying conditions.

    The proposed virtual acoustic rendering framework represents sound-emitting and sound-receiving objects as multiple-input, multiple-output dynamical systems. Each input or output corresponds to a sound traveling wave received or emitted by the object from/to different orientations or at/from different positions of the object. To allow for multiple arriving/departing waves from/to different orientations and/or positions of an object in dynamic conditions, we introduce a discrete-time state-space system formulation that allows the inputs or outputs of a system to mutate dynamically. The SWS framework treats virtual source or receiver objects as time-varying dynamical systems in state-space modal form, each allowing for an unlimited number of sound traveling wave inputs and outputs.

    To model the sound emission and/or reception behavior of an object, data may be collected from measurements. These measurements, which may comprise real or virtual impulse or frequency responses from a real physical object or a numerical physical model of an object, are jointly processed to design a multiple-input, multiple-output state-space model with mutable inputs and/or outputs. This mutable state-space model enables the simulation of direction- and/or position-dependent, frequency-dependent sound wave emission or reception by the object. At run time, each of the mutable state-space object models may present any number of inputs or outputs, with each input or output associated with a received/emitted sound traveling wave from/to a specific arrival/departure position or orientation. In a first formulation, the sound wave form, the traveling of sound waves between object models is simulated by means of delay lines of time-varying length. In a second formulation, the state wave form, the traveling of sound waves between object models is simulated by propagating the state variables of source objects along delay lines of time-varying length.

    SWS allows the accurate simulation of frequency-dependent source and receiver directivity in time-varying conditions without any explicit time-domain or frequency-domain convolution processing. In addition, the framework enables time-varying, obstacle-induced, frequency-dependent attenuation of traveling waves without any dedicated digital filters. SWS facilitates the implementation of efficient virtual acoustic rendering engines either in software or in dedicated hardware, allowing realizations in which the number of delay lines is independent of the number of traveling-wave paths being simulated. Moreover, the method enables straightforward dynamic coupling between virtual acoustic objects and their physics-based simulation counterparts as computed for animation, virtual reality, video games, music synthesis, or other applications.

    In this presentation we will introduce the foundations of SWS and employ a real acoustic violin and a real human head as illustrative examples of a source object and a receiver object, respectively.
In light of available implementation possibilities, we will examine the basic memory requirements and computational cost of the rendering framework and suggest how to conveniently include minimum-phase diffusive elements to procure additional diffuse-field contributions if necessary. Finally, we will outline limitations and discuss future opportunities for development.
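    To fix notation, the discrete-time state-space form that SWS builds on can be summarized as follows (standard notation; the mutable-input/output behavior is paraphrased from the abstract, not reproduced from the paper):

        \[
        \mathbf{x}[n+1] = \mathbf{A}\,\mathbf{x}[n] + \mathbf{B}[n]\,\mathbf{u}[n],
        \qquad
        \mathbf{y}[n] = \mathbf{C}[n]\,\mathbf{x}[n] + \mathbf{D}[n]\,\mathbf{u}[n],
        \]

    where \( \mathbf{A} \) is block-diagonal in modal form, and columns of \( \mathbf{B}[n] \) (inputs) or rows of \( \mathbf{C}[n] \) (outputs) may be added, removed, or re-targeted at run time so that each one tracks a single arriving or departing traveling wave.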

    Auralization of Air Vehicle Noise for Community Noise Assessment

    This paper serves as an introduction to air vehicle noise auralization and documents the current state of the art. Auralization of flyover noise considers the source, path, and receiver as part of a time-marching simulation. Two approaches are offered: a time domain approach performs synthesis followed by propagation, while a frequency domain approach performs propagation followed by synthesis. Source noise description methods are offered for isolated and installed propulsion system and airframe noise sources for a wide range of air vehicles. Methods for the synthesis of broadband, discrete-tone, steady and unsteady periodic, and aperiodic sources are presented, and propagation methods and receiver considerations are discussed. Auralizations applied to vehicles ranging from large transport aircraft to small unmanned aerial systems demonstrate current capabilities.
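    To make the time domain approach concrete, the Python sketch below synthesizes a single steady tone for a straight, level flyover: a time-varying propagation delay yields the Doppler shift, and a 1/r factor models spherical spreading. All numeric values are placeholders, and real auralization frameworks add atmospheric absorption, ground reflections, and broadband components omitted here.

        import numpy as np

        fs, c, dur = 44100, 343.0, 4.0        # sample rate, speed of sound, seconds
        t = np.arange(int(fs * dur)) / fs

        # Level flyover: source passes overhead at altitude h with speed v.
        v, h = 80.0, 150.0                    # placeholder values (m/s, m)
        x_src = v * (t - dur / 2)             # along-track position vs. receiver
        r = np.sqrt(x_src**2 + h**2)          # slant range to the receiver

        # Approximate retarded time: sound heard at t left the source r/c earlier.
        tau = r / c
        f0 = 120.0                            # placeholder tone frequency, Hz
        signal = np.sin(2 * np.pi * f0 * (t - tau)) / r  # Doppler + 1/r spreading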

    Efficient Interactive Sound Propagation in Dynamic Environments

    The physical phenomenon of sound is ubiquitous in our everyday life and is an important component of immersion in interactive virtual reality applications. Sound propagation involves modeling how sound is emitted from a source, interacts with the environment, and is received by a listener. Previous techniques for computing interactive sound propagation in dynamic scenes are based on geometric algorithms such as ray tracing. However, the performance and quality of these algorithms depend strongly on the number of rays traced. In addition, it is difficult to acquire acoustic material properties, and it is challenging to efficiently compute spatial sound effects from the output of ray-tracing-based sound propagation. These problems lead to increased latency and less plausible sound in dynamic interactive environments.

    In this dissertation, we propose three approaches to address these challenges. First, we present an approach that utilizes temporal coherence in the sound field to reuse computation from previous simulation time steps. Second, we present a framework for the automatic acquisition of acoustic material properties using visual and audio measurements of real-world environments. Finally, we propose efficient techniques for computing directional spatial sound for sound propagation with low latency using head-related transfer functions (HRTFs). We have evaluated both the performance and the subjective impact of these techniques on a variety of complex dynamic indoor and outdoor environments and observe an order-of-magnitude speedup over previous approaches. The accuracy of our approaches has been validated against real-world measurements and previous methods. The proposed techniques enable interactive simulation of sound propagation in complex multi-source dynamic environments.
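    As a minimal sketch of the final rendering step described above, the Python code below spatializes a mono propagated signal by convolving it with the head-related impulse response (HRIR) pair nearest to the desired direction. The hrir_db layout and its random contents are hypothetical stand-ins for a measured HRTF dataset.

        import numpy as np
        from scipy.signal import fftconvolve

        def render_binaural(mono, azimuth_deg, hrir_db):
            """Convolve a mono signal with the nearest measured HRIR pair."""
            az_keys = np.array(sorted(hrir_db.keys()))
            nearest = az_keys[np.argmin(np.abs(az_keys - azimuth_deg))]
            left, right = hrir_db[nearest]    # impulse responses per ear
            return np.stack([fftconvolve(mono, left),
                             fftconvolve(mono, right)])

        # Hypothetical dataset: azimuth (degrees) -> (left HRIR, right HRIR).
        rng = np.random.default_rng(0)
        hrir_db = {az: (rng.standard_normal(256) * 0.01,
                        rng.standard_normal(256) * 0.01)
                   for az in range(0, 360, 15)}
        out = render_binaural(rng.standard_normal(44100), 37.0, hrir_db)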

    Spatial sound for computer games and virtual reality

    In this chapter, we discuss spatial sound within the context of Virtual Reality and other synthetic environments such as computer games. We review current audio technologies, sound constraints within immersive multi-modal spaces, and future trends. The review takes into consideration the widely varying levels of audio sophistication in the gaming and VR industries, ranging from standard stereo output to Head Related Transfer Function implementations. The level of sophistication is determined mostly by hardware and system constraints (such as mobile devices or network limitations); however, audio practitioners are developing novel and diverse methods to overcome many of these challenges. No matter what approach is employed, the primary objectives are very similar: the enhancement of the virtual scene and the enrichment of the user experience. We discuss how successful various audio technologies are in achieving these objectives, where they fall short, and how they may be adapted to overcome these shortfalls in future implementations.

    Parametrization, auralization, and authoring of room acoustics for virtual reality applications

    The primary goal of this work has been to develop means to represent the acoustic properties of an environment with a set of spatial sound related parameters. These parameters are used for creating virtual environments in which sounds are expected to be perceived by the user as if they were heard in a corresponding real space. The virtual world may consist of both visual and audio components. Ideally, the sound and visual parts of the virtual scene are coherent with each other, which should improve the user's immersion in the virtual environment. The second aim was to verify the feasibility of the created sound environment parameter set in practice. A virtual acoustic modeling system was implemented in which any spatial sound scene, defined using the developed parameters, can be rendered audible in real time. In other words, the user can listen to the auralized sound according to the defined sound scene parameters. Third, the authoring of such parametric sound scene representations was addressed. In this authoring framework, sound scenes and an associated visual scene can be created and then encoded and transmitted in real time to a remotely located renderer. The visual scene counterpart was created as a part of the multimedia scene, acting simultaneously as a user interface for renderer-side interaction.
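    To illustrate what a parametric sound scene representation of this kind might look like, here is a small Python sketch; the field names and values are hypothetical and do not reproduce the parameter set developed in the thesis.

        from dataclasses import dataclass, field

        @dataclass
        class RoomAcousticParams:
            """Hypothetical room-acoustic parameters for a virtual scene."""
            reverb_time_s: float = 0.8          # RT60, frequency-averaged
            direct_level_db: float = 0.0        # direct-sound gain
            reflections_level_db: float = -6.0  # early-reflection gain
            reverb_level_db: float = -12.0      # late-reverberation gain
            room_size_m: tuple = (8.0, 5.0, 3.0)

        @dataclass
        class SoundScene:
            """A transmittable scene: sources plus room parameters."""
            room: RoomAcousticParams = field(default_factory=RoomAcousticParams)
            sources: list = field(default_factory=list)  # (name, xyz) tuples

        scene = SoundScene()
        scene.sources.append(("footsteps", (2.0, 1.0, 0.0)))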