    Sonic interactions in virtual environments

    This book tackles the design of 3D spatial interactions from an audio-centered and audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences. The key elements that enhance the sensation of place in a virtual environment (VE) are:
    - Immersive audio: the computational aspects of the acoustical-space properties of Virtual Reality (VR) technologies
    - Sonic interaction: the human-computer interplay through auditory feedback in VE
    - VR systems: the natural support for multimodal integration, impacting different application domains
    Sonic Interactions in Virtual Environments will feature state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of the experience in multimodal scenarios, and applications. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, humanities, and beyond. Their mission is to shape an emerging field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities, and to raise awareness among VR researchers and practitioners of the importance of sonic elements when designing immersive environments.

    Shaping the auditory peripersonal space with motor planning in immersive virtual reality

    Immersive audio technologies require personalized binaural synthesis through headphones to provide perceptually plausible virtual and augmented reality (VR/AR) simulations. We introduce, and apply for the first time in VR contexts, the quantitative measure called premotor reaction time (pmRT) for characterizing sonic interactions between humans and the technology through motor planning. In the proposed basic virtual acoustic scenario, listeners are asked to react to a virtual sound approaching from different directions and stopping at different distances within their peripersonal space (PPS). The PPS is highly sensitive to embodied and environmentally situated interactions, anticipating the activation of the motor system for prompt preparation for action. Since immersive VR applications benefit from spatial interactions, modeling the PPS around the listener is crucial to reveal individual behaviors and performances. Our methodology, centered on the pmRT, provides a compact description and approximation of the spatiotemporal PPS processing and boundaries around the head by replicating several well-known neurophysiological phenomena related to PPS, such as auditory asymmetry, front/back calibration and confusion, and ellipsoidal action fields.
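    As a hedged illustration of how a PPS boundary can be approximated from premotor reaction times, the sketch below fits a sigmoid to pmRT as a function of the stopping distance of the approaching sound and reads the boundary off the inflection point. This is a common approach in the PPS literature, not necessarily this study's exact pipeline; the sample data and function names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, rt_min, rt_max, d0, slope):
    """Sigmoidal pmRT profile over distance: fast reactions for sounds
    stopping near the body, slower ones far away. The inflection point d0
    serves as the estimate of the PPS boundary."""
    return rt_min + (rt_max - rt_min) / (1.0 + np.exp(-(d - d0) / slope))

# Hypothetical data: stopping distances (m) and mean pmRT (ms) per distance
distances = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.5, 2.0])
pm_rt = np.array([182.0, 185.0, 193.0, 208.0, 224.0, 231.0, 235.0, 237.0])

# Fit the sigmoid and read the PPS boundary off the inflection point d0
p0 = [pm_rt.min(), pm_rt.max(), 0.8, 0.2]  # initial guess
params, _ = curve_fit(sigmoid, distances, pm_rt, p0=p0)
print(f"Estimated PPS boundary: {params[2]:.2f} m")
```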

    Interactions in Mobile Sound and Music Computing

    Geronazzo, M.; Avanzini, F.; Fontana, F.; Serafin, S.

    Classifying non-individual head-related transfer functions with a computational auditory model: calibration and metrics

    This study explores the use of a multi-feature Bayesian auditory sound localisation model to classify non-individual head-related transfer functions (HRTFs). Based on predicted sound localisation performance, these are grouped into ‘good’ and ‘bad’, and the ‘best’/‘worst’ is selected from each category. First, we present a greedy algorithm for automated individual calibration of the model based on individual sound localisation data. We then discuss the analysis of predicted directional localisation errors and present an algorithm for categorising the HRTFs based on the localisation error distributions within a limited range of directions in front of the listener. Finally, we discuss the validity of the classification algorithm when using averaged instead of individual model parameters. This analysis of auditory modelling results aims to provide a perceptual foundation for automated HRTF personalisation techniques for an improved experience of binaural spatial audio technologies.
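    The grouping step can be sketched briefly: given model-predicted localisation errors per HRTF over a set of frontal directions, threshold the mean error to split the HRTFs into ‘good’ and ‘bad’ and pick the extremes from each group. The data format, threshold value, and function name below are assumptions for illustration, not the paper's calibrated metrics.

```python
import numpy as np

def classify_hrtfs(pred_errors, threshold_deg=30.0):
    """Group non-individual HRTFs into 'good'/'bad' by the mean predicted
    localisation error over frontal directions, and pick the best/worst.

    pred_errors: dict mapping HRTF id -> array of predicted errors (deg),
    one per evaluated frontal direction (hypothetical model-output format).
    """
    mean_err = {h: float(np.mean(e)) for h, e in pred_errors.items()}
    good = {h for h, e in mean_err.items() if e <= threshold_deg}
    bad = set(mean_err) - good
    best = min(good, key=mean_err.get) if good else None
    worst = max(bad, key=mean_err.get) if bad else None
    return good, bad, best, worst

# Hypothetical model output for three candidate HRTF sets
errors = {"hrtf_A": np.array([22.0, 28.0, 25.0]),
          "hrtf_B": np.array([41.0, 39.0, 45.0]),
          "hrtf_C": np.array([18.0, 20.0, 23.0])}
print(classify_hrtfs(errors))
```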

    An active learning procedure for the interaural time difference discrimination threshold

    Measuring the auditory lateralization elicited by interaural time difference (ITD) cues involves the estimation of a psychometric function (PF). The shape of this function usually follows from the analysis of the subjective data and models the probability of correctly localizing the angular position of a sound source. The present study describes and evaluates a procedure for progressively fitting a PF, using Gaussian process classification of the subjective responses produced during a binary decision experiment. The process adaptively refines an approximated PF, following Bayesian inference. At each trial, it suggests the most informative auditory stimulus for function refinement according to Bayesian active learning by disagreement (BALD) mutual information. In this paper, the procedure was modified to accommodate two-alternative forced choice (2AFC) experimental methods and was then compared with a standard adaptive “three-down, one-up” staircase procedure. Our process approximates the average ITD threshold at the 79.4% correct level of lateralization with a mean accuracy increase of 8.9% over the Weibull function fitted to the data of the same test. The final accuracy for the just noticeable difference (JND) in ITD is achieved with only 37.6% of the trials needed by a standard lateralization test.
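    The BALD acquisition step can be sketched as follows, using the closed-form approximation for Gaussian process probit classification from Houlsby et al. (2011): at each trial, score every candidate ITD by the approximate mutual information between its (unknown) response and the latent PF, and present the highest-scoring stimulus. The posterior means and variances below are toy values; a real run would obtain them from a fitted GP classifier.

```python
import numpy as np
from scipy.stats import norm

def bald_score(mu, var):
    """Approximate BALD mutual information (in bits) for GP probit
    classification (Houlsby et al., 2011). mu/var: posterior latent mean
    and variance at each candidate ITD stimulus."""
    p = norm.cdf(mu / np.sqrt(var + 1.0))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    pred_entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    c = np.sqrt(np.pi * np.log(2) / 2.0)
    expected_entropy = (c / np.sqrt(var + c**2)) * np.exp(-mu**2 / (2 * (var + c**2)))
    return pred_entropy - expected_entropy

# Pick the most informative ITD among candidates (toy posterior values)
itds_us = np.linspace(-800, 800, 41)   # candidate ITDs (microseconds)
mu = 0.004 * itds_us                   # toy posterior mean of latent function
var = np.full_like(mu, 0.5)            # toy posterior variance
next_itd = itds_us[np.argmax(bald_score(mu, var))]
print(f"Next stimulus: ITD = {next_itd:.0f} us")
```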

    Localization in elevation with non-individual head-related transfer functions: Comparing predictions of two auditory models

    This paper explores the limits of human localization of sound sources when listening with non-individual Head-Related Transfer Functions (HRTFs), by simulating performance in a localization task in the mid-sagittal plane. Computational simulations are performed on the CIPIC HRTF database using two different auditory models which mimic human hearing processing from a functional point of view. Our methodology investigates the opportunity of using virtual experiments instead of time- and resource-demanding psychoacoustic tests, which could also lead to potentially unreliable results. Four different perceptual metrics were implemented in order to identify relevant differences between the auditory models in the problem of selecting the best available non-individual HRTF. Results report a high correlation between the two models, denoting an overall similar trend; however, we discuss discrepancies in the predictions which should be carefully considered for the applicability of our methodology to the HRTF selection problem.
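    For concreteness, two metrics commonly used for mid-sagittal-plane localisation in this literature (after Middlebrooks, 1999) can be computed as in the sketch below: the quadrant error rate, and the local polar RMS error over the remaining responses. This is an illustrative sketch; the four metrics actually implemented in the paper are not necessarily these.

```python
import numpy as np

def polar_metrics(target_deg, response_deg, qe_threshold=90.0):
    """Quadrant error rate (QE, %) and local polar RMS error (PE, deg)
    for mid-sagittal-plane responses, after Middlebrooks (1999).
    Inputs are polar angles in degrees."""
    err = (response_deg - target_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    quadrant = np.abs(err) > qe_threshold
    qe = 100.0 * np.mean(quadrant)
    pe = np.sqrt(np.mean(err[~quadrant] ** 2)) if np.any(~quadrant) else np.nan
    return qe, pe

# Hypothetical predicted responses from an auditory model for one HRTF set
targets = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0])
responses = np.array([5.0, 25.0, 70.0, 95.0, 110.0, 250.0, 175.0])
qe, pe = polar_metrics(targets, responses)
print(f"QE = {qe:.1f}%, PE = {pe:.1f} deg")
```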

    Structural Modeling of Pinna-Related Transfer Functions for 3-D Sound Rendering

    This paper considers the general problem of modeling pinna-related transfer functions (PRTFs) for 3-D sound rendering. Following a structural approach, we present an algorithm for the decomposition of PRTFs into ear resonances and frequency notches due to reflections over pinna cavities, and exploit it to deliver a method for extracting the frequencies of the most important spectral notches. Ray-tracing analysis reveals a convincing correspondence between the extracted frequencies and the pinna cavities of a number of subjects. We then propose a model for PRTF synthesis which allows separate control of the evolution of resonances and spectral notches through the design of two distinct filter blocks. The resulting model is suitable for future integration into a structural head-related transfer function model, and for parametrization over the anthropometric measurements of a wide range of subjects.
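    A minimal sketch of the two-block idea: model resonances as positive-gain and notches as negative-gain second-order peaking filters (standard RBJ audio-EQ biquads) and cascade the two blocks. All centre frequencies, gains, and Q values below are illustrative placeholders, not parameters fitted from PRTF data.

```python
import numpy as np
from scipy import signal

FS = 44100.0  # sample rate (Hz)

def peaking(f0, gain_db, q):
    """RBJ peaking-EQ biquad as a normalized SOS row: boosts (gain_db > 0)
    or cuts (gain_db < 0) around f0, flat elsewhere."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / FS
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return np.hstack([b / a[0], a / a[0]])

# Two distinct filter blocks, as in the structural approach: a resonance
# block (boosts) and a notch block (cuts). Values are hypothetical.
resonances = [peaking(4000.0, +10.0, 1.5), peaking(12000.0, +8.0, 2.0)]
notches = [peaking(7000.0, -15.0, 8.0), peaking(10000.0, -20.0, 8.0)]
sos = np.vstack(resonances + notches)

x = np.random.randn(int(FS))   # 1 s of white noise as a test input
y = signal.sosfilt(sos, x)     # PRTF-like spectral shaping
```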

    3D audio and sound anchors for the multimodal exploration of virtual environments

    This work presents an interactive audio-haptic system for orientation and mobility assistance for blind subjects, and a subjective experiment aimed at studying the cognitive mechanisms of spatial representation in the absence of visual information. In particular, we present an object-recognition experiment that investigates the role of dynamic spatial auditory information integrated with haptic feedback in a simple virtual environment. This information is structured as a “sound anchor”, delivered through headphones via binaural 3D sound rendering techniques (in particular, via suitably personalized Head-Related Transfer Functions, or HRTFs). The experimental results on the subjects’ recognition times show a relationship between the position of the sound anchor and the shape of the recognized object. Moreover, a qualitative analysis of the exploration trajectories suggests the emergence of behavioral changes between the unimodal and multimodal conditions.

    Model-based customized binaural reproduction through headphones

    Generalized head-related transfer functions (HRTFs) represent a cheap and straightforward means of providing 3D rendering in headphone reproduction. However, they are known to produce evident sound localization errors, including incorrect perception of elevation, front-back reversals, and lack of externalization, especially when head tracking is not utilized in the reproduction. Individual anthropometric features therefore play a key role in characterizing HRTFs. On the other hand, HRTF measurements on a significant number of subjects are both expensive and inconvenient. This short paper briefly presents a structural HRTF model that, if properly rendered through the proposed hardware (wireless headphones augmented with motion and vision sensors), can be used for efficient and immersive sound reproduction. Special attention is devoted to the contribution of the external ear to the HRTF: data and results collected to date by the authors allow parametrization of the model according to individual anthropometric data, which in turn can be estimated automatically through straightforward image analysis. The proposed hardware and software can be used to render scenes with multiple audiovisual objects in a number of contexts such as computer games, cinema, edutainment, and many others.
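    As a toy illustration of the structural approach, the sketch below renders only one cue, the interaural time difference, using the Woodworth spherical-head formula; its single parameter (head radius) is exactly the kind of quantity that can be estimated from individual anthropometry. A full structural model would add head-shadow, torso, and pinna filter blocks. All constants and the sign convention (positive azimuth toward the listener's right) are assumptions for this sketch.

```python
import numpy as np

FS = 44100            # sample rate (Hz)
HEAD_RADIUS = 0.0875  # m; individualizable from anthropometric measurements
C = 343.0             # speed of sound (m/s)

def render_azimuth(x, azimuth_deg):
    """Apply the Woodworth ITD as an integer-sample delay to the far ear.
    Positive azimuth places the source to the listener's right, so the
    left ear receives the delayed copy."""
    theta = np.deg2rad(azimuth_deg)
    itd = (HEAD_RADIUS / C) * (abs(theta) + np.sin(abs(theta)))  # Woodworth
    delay = int(round(itd * FS))
    delayed = np.concatenate([np.zeros(delay), x])[:len(x)]
    left, right = (delayed, x) if azimuth_deg >= 0 else (x, delayed)
    return np.stack([left, right])  # 2 x N stereo buffer

# 1 s of white noise rendered 45 degrees to the right (~0.38 ms ITD)
stereo = render_azimuth(np.random.randn(FS), azimuth_deg=45)
```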