3,410 research outputs found

    Effects of virtual acoustics on dynamic auditory distance perception

    Sound propagation encompasses various acoustic phenomena, including reverberation. Current virtual acoustic methods, ranging from parametric filters to physically accurate solvers, can simulate reverberation with varying degrees of fidelity. We investigate the effects of reverberant sounds generated using different propagation algorithms on acoustic distance perception, i.e., how far away humans perceive a sound source to be. In particular, we evaluate two classes of methods for real-time sound propagation in dynamic scenes, one based on parametric filters and the other on ray tracing. Our study shows that the more accurate method produces less distance compression than the approximate, filter-based method, which suggests that accurate reverberation in VR results in a better reproduction of acoustic distances. We also quantify the levels of distance compression introduced by different propagation methods in a virtual environment. Comment: 8 pages, 7 figures
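
    As a hedged illustration of how distance compression of this kind can be quantified, the minimal Python sketch below fits the commonly used compressive power function perceived = k * d^a to paired true/perceived distances; an exponent a below 1 indicates compression. The data values and function names are illustrative assumptions, not taken from the study.

    # Illustrative sketch (not the study's analysis code): fit a compressive
    # power function to perceived-vs-true distance judgments.
    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(d, k, a):
        """Perceived distance modeled as k * d**a; a < 1 indicates compression."""
        return k * d**a

    # Hypothetical true source distances (m) and mean perceived distances (m).
    true_d = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    perceived_d = np.array([1.1, 1.9, 3.2, 5.5, 8.7])

    (k, a), _ = curve_fit(power_law, true_d, perceived_d, p0=(1.0, 1.0))
    print(f"k = {k:.2f}, a = {a:.2f}")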

    CPX based synthesis for binaural auralization of vehicle rolling noise to an arbitrary positioned stander-by receiver

    Virtual reality is becoming an important tool for studying the interaction between pedestrians and road vehicles, allowing the analysis of potentially hazardous situations without placing subjects at real risk. However, most current simulators are unable to accurately recreate traffic sounds that are congruent with the visual scene, which has been recognized as a shortcoming of the virtual audio-visual scenarios used in such contexts. This study proposes a method for delivering a binaural auralization of the noise generated by a moving vehicle to an arbitrarily located moving listener (pedestrian). Building on previously developed methods, the proposal presented here integrates a dynamic auralization engine in a novel way, enabling real-time updating of the acoustic cues in the binaural signal delivered via headphones. Furthermore, the proposed auralization routine uses Close ProXimity (CPX) tyre-road noise signals as the sound source input, facilitating quick interchangeability of source signals and easing the noise collection procedure. Two validation experiments were carried out: one to quantitatively compare field signals with CPX-derived virtual signal recordings, and another to assess these same signals through psychoacoustic models. The latter aims to ensure that the reproduction of the synthesized signal is perceptually similar to what occurs in pedestrian/vehicle interactions during street crossing. Discrepancies were detected, and were more pronounced when the vehicle was at close distance from the receiver (pedestrian); however, the analysis indicated that these pose no hindrance to the study of vehicle–pedestrian interaction. Improvements to the method are identified and further developments are proposed.

    This work was supported by the ‘‘Fundação para a Ciência e a Tecnologia” [PTDC/ECM-TRA/3568/2014, SFRH/BD/131638/2017, UIDB/04029/2020]. This work is part of the activities of the research project AnPeB – ‘‘ANalysis of PEdestrians Behaviour based on simulated urban environments and its incorporation in risk modelling” (PTDC/ECM-TRA/3568/2014), funded by the ‘‘Promover a Produção Científica e Desenvolvimento Tecnológico e a Constituição de Redes Temáticas” (3599-PPCDT) project and supported by the ‘‘European Community Fund FEDER” and the doctoral scholarship SFRH/BD/131638/2017, funded by ‘‘Fundação para a Ciência e a Tecnologia (FCT)”
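
    The core of a dynamic auralization engine like the one described above is the per-block update of distance-dependent cues as the source or listener moves. The Python sketch below is a minimal, assumed stand-in for that idea (propagation delay and spherical spreading only); it is not the authors' engine, and it omits HRTF filtering, Doppler interpolation, and ground reflections.

    # Minimal sketch (not the authors' auralization routine): per-block update
    # of propagation delay and 1/r attenuation for a CPX-like source signal.
    import numpy as np

    FS = 44100          # sample rate (Hz)
    C = 343.0           # speed of sound (m/s)
    BLOCK = 512         # samples per processing block

    def render_block(source, block_idx, src_pos, lst_pos, out):
        """Write one delayed, attenuated block of the source into the output buffer."""
        r = np.linalg.norm(np.asarray(src_pos) - np.asarray(lst_pos))
        delay = int(round(r / C * FS))      # propagation delay in samples
        gain = 1.0 / max(r, 1.0)            # spherical spreading (clamped near 0 m)
        start = block_idx * BLOCK
        seg = source[start:start + BLOCK] * gain
        out[start + delay:start + delay + len(seg)] += seg

    # Example: a vehicle passing a stationary listener at the origin.
    duration = 2.0
    n = int(duration * FS)
    source = 0.1 * np.random.randn(n)       # stand-in for a CPX tyre-road recording
    out = np.zeros(n + FS)                  # extra headroom for the propagation delay
    for b in range(n // BLOCK):
        t = b * BLOCK / FS
        vehicle_pos = (-20.0 + 20.0 * t, 3.0, 0.0)   # moving along the road
        render_block(source, b, vehicle_pos, (0.0, 0.0, 0.0), out)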

    Auralization of Air Vehicle Noise for Community Noise Assessment

    This paper serves as an introduction to air vehicle noise auralization and documents the current state of the art. Auralization of flyover noise considers the source, path, and receiver as part of a time-marching simulation. Two approaches are offered: a time domain approach performs synthesis followed by propagation, while a frequency domain approach performs propagation followed by synthesis. Source noise description methods are offered for isolated and installed propulsion system and airframe noise sources for a wide range of air vehicles. Methods for the synthesis of broadband, discrete tone, steady and unsteady periodic, and aperiodic sources are presented, and propagation methods and receiver considerations are discussed. Auralizations applied to vehicles ranging from large transport aircraft to small unmanned aerial systems demonstrate current capabilities
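
    To make the two orderings concrete, the following minimal Python sketch (an illustrative assumption, not code from the paper) follows the time domain route: it synthesizes a discrete tone plus broadband noise at the source and then applies spreading loss and a crude low-pass filter standing in for atmospheric absorption. The frequency domain route would instead apply the propagation filtering to the source spectra before synthesis.

    # Illustrative time-domain flyover sketch: synthesis first, propagation second.
    # All numerical values (tone frequency, range, rolloff) are assumed.
    import numpy as np
    from scipy.signal import butter, lfilter

    FS = 44100
    t = np.arange(0, 1.0, 1.0 / FS)

    # Source synthesis: one discrete tone plus a broadband component.
    tone = 0.5 * np.sin(2 * np.pi * 120.0 * t)      # e.g. a 120 Hz rotor tone
    broadband = 0.2 * np.random.randn(len(t))       # broadband noise source
    source = tone + broadband

    # Propagation: spherical spreading and a low-pass filter as a crude
    # stand-in for atmospheric absorption at a 300 m slant range.
    r = 300.0
    b_lp, a_lp = butter(2, 4000.0 / (FS / 2))       # 4 kHz rolloff (assumed)
    received = (1.0 / r) * lfilter(b_lp, a_lp, source)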

    Psychophysical Evaluation of Three-Dimensional Auditory Displays

    This report describes the progress made during the first year of a three-year Cooperative Research Agreement (CRA NCC2-542). The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources, and 'echoic' sources. The results of our research on two of these topics, the role of head movements and the role of echoes and reflections, were reported in the most recent Semi-Annual Progress Report (Appendix A). In the period since the last Progress Report we have been studying a third topic, the localizability of moving sources; the results of this research are described here. The fidelity of a virtual auditory display is critically dependent on precise measurement of the listener's Head-Related Transfer Functions (HRTFs), which are used to produce the virtual auditory images. We continue to explore methods for improving our HRTF measurement technique. During this reporting period we compared HRTFs measured using our standard open-canal probe tube technique with HRTFs measured with the closed-canal insert microphones from the Crystal River Engineering Snapshot system
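
    As background for how measured HRTFs are typically used to produce the virtual images in such a display, the Python sketch below convolves a mono stimulus with a left/right head-related impulse response (HRIR) pair for one direction. The HRIR arrays here are crude placeholders, not measurements from the probe-tube or Snapshot systems mentioned above.

    # Minimal HRTF-rendering sketch: placeholder HRIRs, not measured data.
    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrir_left, hrir_right):
        """Convolve a mono stimulus with an HRIR pair to place a static virtual source."""
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        return np.stack([left, right], axis=-1)   # shape (N, 2): left, right channels

    # Placeholder 256-tap HRIRs providing only a crude interaural time/level cue.
    hrir_l = np.zeros(256); hrir_l[0] = 1.0
    hrir_r = np.zeros(256); hrir_r[30] = 0.6      # ~0.7 ms later and quieter at the far ear
    stimulus = np.random.randn(44100)             # 1 s of noise at 44.1 kHz
    binaural = render_binaural(stimulus, hrir_l, hrir_r)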

    Messaging in mobile augmented reality audio

    Asynchronous multi-user communication is typically done using text. In mobile use, however, text input can be slow and cumbersome, and attention on the device's display is required both when writing and when reading messages. A messaging application was developed to test the concept of sharing short messages between members of groups using recorded speech rather than text. These messages can be listened to as they arrive, or browsed through and listened to later. The application is intended to be used on a mobile augmented reality audio platform, allowing almost undisturbed perception of and interaction with the surrounding environment while communicating using audio messages. A small group of users tested the application on desktop and laptop computers. The users found one of the biggest advantages over text-based communication to be the additional information conveyed by a spoken message, which is much more expressive than the same message written down. Compared with text chats, the users found it difficult to quickly browse through old messages and confusing to participate in several discussions at the same time

    Auditory Spatial Layout

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving

    Head-Related Transfer Functions and Virtual Auditory Display
