
    Localization and Rendering of Sound Sources in Acoustic Fields

    This doctoral thesis deals with sound source localization and acoustic zooming. The primary goal of this dissertation is to design an acoustic zooming system that can zoom in on the sound of one speaker among multiple speakers, even when they speak simultaneously. The system is compatible with surround sound techniques. In particular, the main contributions of the doctoral thesis are as follows: 1. Design of a method for estimating multiple directions of arriving sound. 2. Design of a method for acoustic zooming using DirAC. 3. Design of a combined system based on the previous steps, which can be used in teleconferencing.
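
    As a hedged illustration of the first contribution, the sketch below estimates multiple arrival directions from first-order B-format (W, X, Y) signals in the DirAC manner; the function name, STFT parameters, and histogram peak-picking are assumptions for illustration, not the thesis' actual implementation.

```python
import numpy as np

def dirac_azimuths(W, X, Y, n_fft=1024, hop=512):
    """Per-bin azimuth estimates from first-order B-format signals.

    A minimal DirAC-style sketch (names and parameters are assumptions,
    not the thesis code): the active intensity vector in each
    time-frequency bin points along the energy flow, so the direction of
    arrival is its opposite; histogramming the per-bin azimuths exposes
    several simultaneous talkers as separate peaks.
    """
    def stft(x):
        frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
        return np.fft.rfft(frames * np.hanning(n_fft), axis=-1)

    Wf, Xf, Yf = stft(W), stft(X), stft(Y)
    # Active intensity components: Re{P* . U}, with X/Y as velocity proxies.
    Ix = np.real(np.conj(Wf) * Xf)
    Iy = np.real(np.conj(Wf) * Yf)
    az = np.arctan2(-Iy, -Ix)  # DOA points against the energy flow
    hist, edges = np.histogram(az, bins=72, range=(-np.pi, np.pi))
    return az, hist, edges

# Peaks in `hist` above a threshold give candidate talker directions.
```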

    Spatial Sound Rendering – A Survey

    Simulating sound propagation and audio rendering can improve the sense of realism and immersion both in complex acoustic environments and in dynamic virtual scenes. In studies of sound auralization the focus has traditionally been on room acoustics modelling, but most of the same methods are also applicable to the construction of virtual environments such as those developed for computer gaming, cognitive research, and simulated training scenarios. This paper reviews state-of-the-art techniques based on acoustic principles that apply not only to real rooms but also to 3D virtual environments. The paper also highlights the need to expand the field of immersive sound to web-based browsing environments because, despite the interest and many benefits, few developments seem to have taken place in this context. Moreover, the paper lists the most effective algorithms used for modelling spatial sound propagation and reports their advantages and disadvantages. Finally, the paper emphasizes the evaluation of the proposed works.
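
    One family of propagation algorithms such surveys cover is the image-source method; as a minimal illustration (with assumed variable names, not code from the survey), the sketch below mirrors a point source across the six walls of a shoebox room to generate its first-order reflections.

```python
import numpy as np

def first_order_images(src, room):
    """First-order image sources for a shoebox room.

    Mirroring the source across a wall at coordinate 0 gives -x; across
    the opposite wall at L it gives 2L - x. Each image radiates the
    first-order reflection from its wall; higher orders come from
    mirroring the images again.
    """
    images = []
    for axis in range(3):
        lo = src.copy(); lo[axis] = -src[axis]
        hi = src.copy(); hi[axis] = 2.0 * room[axis] - src[axis]
        images.extend([lo, hi])
    return images

src = np.array([1.0, 2.0, 1.5])      # source position in a 5 x 4 x 3 m room
for img in first_order_images(src, np.array([5.0, 4.0, 3.0])):
    print(img)                       # six mirrored source positions
```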

    VR-based Soundscape Evaluation: Auralising the Sound from Audio Rendering, Reflection Modelling to Source Synthesis in the Acoustic Environment

    Soundscape has been growing as a research field associated with acoustics, urban planning, environmental psychology and other disciplines since it was first introduced in the 1960s. To assess soundscapes, subjective validation is frequently integrated with soundscape reproduction. However, the existing soundscape standards do not give clear reproduction specifications for recreating a virtual sound environment. Selecting appropriate audio rendering methods, simulating sound propagation, and synthesising non-point sound sources remain major challenges for researchers. This thesis therefore attempts to give alternative or simplified strategies for reproducing a virtual sound environment by examining binaural versus monaural audio rendering, reflection modelling during sound propagation, and reduced numbers of synthesis points for non-point sources. To address these open issues, a systematic review of original studies first examines the ecological validity of immersive virtual reality in soundscape evaluation. Audio-visual stimuli of sound environments are then recorded and reproduced, and participants give their subjective responses through structured questionnaires; in this way, different audio rendering, reflection modelling, and source synthesis methods are validated by subjective evaluation. The results of this thesis reveal that a rational setup of VR techniques and evaluation methods provides a solid foundation for soundscape evaluation with reliable ecological validity. For audio rendering, binaural rendering still dominates soundscape evaluation compared with monaural rendering. For sound propagation under different reflection conditions, lower reflection orders can be employed to assess different kinds of sounds in outdoor sound environments through VR experiences, and combining HMDs with Ambisonics significantly strengthens immersion even at low orders. For non-point source synthesis, especially line sources, once enough synthesis points are used that adjacent points fall within the minimum audible angle, human ears cannot distinguish the locations of the synthesised sources in the horizontal plane, which significantly increases immersion. These minimum specifications and simplifications refine the understanding of soundscape reproduction, and the findings will help researchers and engineers determine appropriate audio rendering, sound propagation modelling, and non-point source synthesis strategies.
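
    The line-source finding suggests a simple sizing rule; the sketch below estimates how many synthesis points keep adjacent points within a minimum audible angle at the listener. The 1-degree MAA default and the example geometry are assumptions for illustration, not values from the thesis.

```python
import numpy as np

def min_synthesis_points(line_length, distance, maa_deg=1.0):
    """Synthesis points needed so a line source sounds spatially continuous.

    A back-of-envelope sketch of the criterion described above: once
    adjacent synthesis points subtend less than the minimum audible angle
    (MAA) at the listener, their discreteness becomes inaudible.
    """
    # Angle the whole line subtends at a listener facing its midpoint.
    total_angle = 2.0 * np.degrees(np.arctan(line_length / (2.0 * distance)))
    return int(np.ceil(total_angle / maa_deg)) + 1

print(min_synthesis_points(10.0, 20.0))  # a 10 m line heard from 20 m -> 30
```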

    3D Time-Based Aural Data Representation Using D4 Library’s Layer Based Amplitude Panning Algorithm

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016). The following paper introduces a new Layer Based Amplitude Panning algorithm and the supporting D4 library of rapid prototyping tools for 3D time-based data representation using sound. The algorithm is designed to scale and to support a broad array of configurations, with a particular focus on High Density Loudspeaker Arrays (HDLAs). The supporting rapid prototyping tools leverage oculocentric strategies for importing, editing, and rendering data, offering an array of innovative approaches to spatial data editing and representation through sound in HDLA scenarios. The resulting D4 ecosystem aims to address the shortcomings of existing approaches to spatial aural representation of data and offers unique opportunities for furthering research in spatial data audification and sonification, as well as in transportable and scalable spatial media creation and production.
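
    To make the panning idea concrete, here is a hedged sketch of layer-based amplitude panning in general: speakers are grouped into horizontal layers by elevation, the source is crossfaded between the two bracketing layers, then panned between neighbouring speakers within each layer. This illustrates the generic technique only; the actual D4 algorithm may differ, and all names and the example rig are assumptions.

```python
import numpy as np

def pair_pan(azimuth, speakers):
    """Constant-power pan between the two speakers bracketing `azimuth`.

    Returns {speaker_azimuth: gain}. All angles in degrees; `speakers`
    lists the azimuths of one horizontal layer.
    """
    left = min(speakers, key=lambda a: (azimuth - a) % 360)
    right = min(speakers, key=lambda a: (a - azimuth) % 360)
    if left == right:                 # source sits exactly on a speaker
        return {left: 1.0}
    p = ((azimuth - left) % 360) / ((right - left) % 360)
    return {left: np.cos(p * np.pi / 2), right: np.sin(p * np.pi / 2)}

def lbap_gains(azimuth, elevation, layers):
    """Layer-based amplitude panning sketch (illustrative, not the D4 code).

    `layers` maps a layer elevation -> list of speaker azimuths. The source
    is crossfaded between the two layers bracketing its elevation, then
    panned inside each contributing layer.
    """
    elevs = sorted(layers)
    below = [e for e in elevs if e <= elevation]
    above = [e for e in elevs if e >= elevation]
    lo = below[-1] if below else elevs[0]
    hi = above[0] if above else elevs[-1]
    if lo == hi:
        layer_gain = {lo: 1.0}
    else:
        f = (elevation - lo) / (hi - lo)
        layer_gain = {lo: np.cos(f * np.pi / 2), hi: np.sin(f * np.pi / 2)}
    gains = {}
    for e, g in layer_gain.items():
        for spk, g_pan in pair_pan(azimuth, layers[e]).items():
            gains[(e, spk)] = g * g_pan   # (elevation, azimuth) -> gain
    return gains

# e.g. two rings of four speakers each; source at 30 deg azimuth, 20 deg up
rig = {0: [0, 90, 180, 270], 40: [45, 135, 225, 315]}
print(lbap_gains(30.0, 20.0, rig))
```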

    Multi-Modal Perception for Selective Rendering

    A major challenge in generating high-fidelity virtual environments (VEs) is providing realism at interactive rates. High-fidelity simulation of light and sound is still unachievable in real time, as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high-fidelity rendering to improve performance through a series of novel exploitations: parts of the scene that are not currently being attended to by the viewer are rendered at a much lower quality without the difference being perceived. This paper investigates the effect spatialised directional sound has on the visual attention of a user towards rendered images. These perceptual artefacts are utilised in selective rendering pipelines via multi-modal maps. The multi-modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms with a series of fixed-cost rendering functions, and are found to perform significantly better than image saliency maps naively applied to multi-modal virtual environments.
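
    As a hedged sketch of the multi-modal-map idea (the Gaussian spread, the blending weight, and both function names are assumptions, not the paper's calibrated model), the code below blends a visual saliency map with an attention bump at the pixel where a directional sound source projects, then uses the combined importance to allocate a fixed ray budget per pixel.

```python
import numpy as np

def multimodal_map(image_saliency, sound_px, sigma=40.0, audio_weight=0.5):
    """Blend visual saliency with a directional-sound attention cue.

    Attention drawn by a spatialised sound is modelled as a Gaussian bump
    centred on the pixel where the source projects; the bump is blended
    with the visual saliency map and the result normalised to [0, 1].
    """
    h, w = image_saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    audio = np.exp(-((xs - sound_px[0]) ** 2 + (ys - sound_px[1]) ** 2)
                   / (2.0 * sigma ** 2))
    combined = (1.0 - audio_weight) * image_saliency + audio_weight * audio
    return combined / combined.max()

def rays_per_pixel(importance, total_rays):
    """Fixed-cost selective rendering: spend the ray budget where it matters."""
    alloc = importance / importance.sum() * total_rays
    return np.maximum(np.rint(alloc).astype(int), 1)  # at least 1 ray/pixel
```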