Co-Localization of Audio Sources in Images Using Binaural Features and Locally-Linear Regression
This paper addresses the problem of localizing audio sources using binaural
measurements. We propose a supervised formulation that simultaneously localizes
multiple sources at different locations. The approach is intrinsically
efficient because, contrary to prior work, it relies neither on source
separation, nor on monaural segregation. The method starts with a training
stage that establishes a locally-linear Gaussian regression model between the
directional coordinates of all the sources and the auditory features extracted
from binaural measurements. While fixed-length wide-spectrum sounds (white
noise) are used for training to reliably estimate the model parameters, we show
that the testing (localization) can be extended to variable-length
sparse-spectrum sounds (such as speech), thus enabling a wide range of
realistic applications. Indeed, we demonstrate that the method can be used for
audio-visual fusion, namely to map speech signals onto images and hence to
spatially align the audio and visual modalities, thus making it possible to discriminate
between speaking and non-speaking faces. We release a novel corpus of real-room
recordings that allow quantitative evaluation of the co-localization method in
the presence of one or two sound sources. Experiments demonstrate increased
accuracy and speed relative to several state-of-the-art methods.
Comment: 15 pages, 8 figures
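As a toy illustration of the training stage described above (the function names, the crude k-means partition, and the region count are assumptions, not the paper's algorithm, and the Gaussian noise model is omitted), a locally-linear regression from binaural feature vectors to directional coordinates can be fit region by region:

```python
import numpy as np

def train_local_linear(features, directions, n_regions=16, seed=0):
    """Partition feature space (crude k-means), fit one affine map per region."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), n_regions, replace=False)]
    for _ in range(50):  # Lloyd iterations
        labels = ((features[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for k in range(n_regions):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(0)
    maps = []
    for k in range(n_regions):
        X, Y = features[labels == k], directions[labels == k]
        Xa = np.hstack([X, np.ones((len(X), 1))])   # append bias for affine fit
        W, *_ = np.linalg.lstsq(Xa, Y, rcond=None)  # directions ~= Xa @ W
        maps.append(W)
    return centers, maps

def localize(feature, centers, maps):
    """Predict source direction for one binaural feature vector."""
    k = ((centers - feature) ** 2).sum(-1).argmin()
    return np.append(feature, 1.0) @ maps[k]
```

Training on wide-spectrum sounds, as in the paper, amounts to observing the feature vector in every region; localization then reduces to a region lookup plus one affine map.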
Acoustic Space Learning for Sound Source Separation and Localization on Binaural Manifolds
In this paper we address the problems of modeling the acoustic space
generated by a full-spectrum sound source and of using the learned model for
the localization and separation of multiple sources that simultaneously emit
sparse-spectrum sounds. We lay theoretical and methodological grounds in order
to introduce the binaural manifold paradigm. We perform an in-depth study of
the latent low-dimensional structure of the high-dimensional interaural
spectral data, based on a corpus recorded with a human-like audiomotor robot
head. A non-linear dimensionality reduction technique is used to show that
these data lie on a two-dimensional (2D) smooth manifold parameterized by the
motor states of the listener, or equivalently, the sound source directions. We
propose a probabilistic piecewise affine mapping model (PPAM) specifically
designed to deal with high-dimensional data exhibiting an intrinsic piecewise
linear structure. We derive a closed-form expectation-maximization (EM)
procedure for estimating the model parameters, followed by Bayes inversion for
obtaining the full posterior density function of a sound source direction. We
extend this solution to deal with missing data and redundancy in real world
spectrograms, and hence for 2D localization of natural sound sources such as
speech. We further generalize the model to the challenging case of multiple
sound sources and we propose a variational EM framework. The associated
algorithm, referred to as variational EM for source separation and localization
(VESSL) yields a Bayesian estimation of the 2D locations and time-frequency
masks of all the sources. Comparisons of the proposed approach with several
existing methods reveal that the combination of acoustic-space learning with
Bayesian inference enables our method to outperform state-of-the-art methods.
Comment: 19 pages, 9 figures, 3 tables
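The Bayes-inversion step can be illustrated in closed form for a single affine piece of such a model: if direction d and feature f are jointly Gaussian, the posterior over d given f is again Gaussian. The sketch below is generic Gaussian conditioning, not the PPAM/VESSL implementation:

```python
import numpy as np

def posterior_direction(f, mu_d, mu_f, C_dd, C_df, C_ff):
    """Posterior p(d | f) when (d, f) are jointly Gaussian (one affine piece)."""
    K = C_df @ np.linalg.inv(C_ff)   # regression gain
    mean = mu_d + K @ (f - mu_f)     # posterior mean of the direction
    cov = C_dd - K @ C_df.T          # posterior covariance
    return mean, cov
```

For an affine observation model f = A d + b + e with e ~ N(0, S), the required moments are mu_f = A mu_d + b, C_ff = A C_dd A^T + S, and C_df = C_dd A^T; a full piecewise-affine posterior is a responsibility-weighted mixture of such pieces.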
Direction of Arrival with One Microphone, a few LEGOs, and Non-Negative Matrix Factorization
Conventional approaches to sound source localization require at least two
microphones. It is known, however, that people with unilateral hearing loss can
also localize sounds. Monaural localization is possible thanks to the
scattering by the head, though it hinges on learning the spectra of the various
sources. We take inspiration from this human ability to propose algorithms for
accurate sound source localization using a single microphone embedded in an
arbitrary scattering structure. The structure modifies the frequency response
of the microphone in a direction-dependent way giving each direction a
signature. While knowing those signatures is sufficient to localize sources of
white noise, localizing speech is much more challenging: it is an ill-posed
inverse problem which we regularize by prior knowledge in the form of learned
non-negative dictionaries. We demonstrate a monaural speech localization
algorithm based on non-negative matrix factorization that does not depend on
sophisticated, designed scatterers. In fact, we show experimental results with
ad hoc scatterers made of LEGO bricks. Even with these rudimentary structures
we can accurately localize arbitrary speakers; that is, we do not need to learn
the dictionary for the particular speaker to be localized. Finally, we discuss
multi-source localization and the related limitations of our approach.
Comment: This article has been accepted for publication in IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP)
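A minimal sketch of this dictionary-based idea (array shapes, iteration counts, and the Frobenius-norm objective are illustrative assumptions, not the paper's exact algorithm): score every candidate direction by how well its direction-dependent signature, applied to a learned nonnegative speech dictionary, explains the observed magnitude spectrogram:

```python
import numpy as np

def nmf_activations(V, W, n_iter=300):
    """Multiplicative updates for H >= 0 minimizing ||V - W H||_F, W fixed."""
    H = np.full((W.shape[1], V.shape[1]), 0.1)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ (W @ H) + 1e-9)
    return H

def localize_monaural(V, W_speech, signatures):
    """Pick the direction whose filtered dictionary best explains spectrogram V."""
    errs = []
    for h in signatures:               # h: per-frequency gain of one direction
        Wd = h[:, None] * W_speech     # direction-dependent dictionary
        H = nmf_activations(V, Wd)
        errs.append(np.linalg.norm(V - Wd @ H))
    return int(np.argmin(errs))
```

Because the dictionary models speech in general rather than one voice, the same scoring works for speakers never seen during training, mirroring the speaker-independence claim above.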
Effects of virtual acoustics on dynamic auditory distance perception
Sound propagation encompasses various acoustic phenomena including
reverberation. Current virtual acoustic methods, ranging from parametric
filters to physically-accurate solvers, can simulate reverberation with varying
degrees of fidelity. We investigate the effects of reverberant sounds generated
using different propagation algorithms on acoustic distance perception, i.e.,
how far away humans perceive a sound source to be. In particular, we evaluate two
classes of methods for real-time sound propagation in dynamic scenes, based on
parametric filters and ray tracing. Our study shows that the more accurate
method exhibits less distance compression than the approximate,
filter-based method. This suggests that accurate reverberation in VR results in
a better reproduction of acoustic distances. We also quantify the levels of
distance compression introduced by different propagation methods in a virtual
environment.
Comment: 8 pages, 7 figures
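Distance compression of this kind is commonly quantified by fitting a compressive power function, perceived ≈ k·d^a, to the responses, where an exponent a < 1 indicates compression. A minimal log-log least-squares fit, with purely hypothetical data, might look like:

```python
import numpy as np

def fit_compression(actual, perceived):
    """Fit perceived = k * actual**a by least squares in log-log space."""
    A = np.vstack([np.log(actual), np.ones(len(actual))]).T
    (a, logk), *_ = np.linalg.lstsq(A, np.log(perceived), rcond=None)
    return a, np.exp(logk)   # a < 1 means distances are compressed
```

Comparing the fitted exponents across propagation methods gives a single-number summary of how much each method compresses perceived distance.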
Studies on binaural and monaural signal analysis methods and applications
Sound signals can contain a lot of information about the environment and the sound sources present in it. This thesis presents novel contributions to the analysis of binaural and monaural sound signals. Some new applications are introduced in this work, but the emphasis is on analysis methods. The three main topics of the thesis are computational estimation of sound source distance, analysis of binaural room impulse responses, and applications intended for augmented reality audio.
A novel method for binaural sound source distance estimation is proposed. The method is based on learning the coherence between the sounds entering the left and right ears. Comparisons to an earlier approach are also made. It is shown that these kinds of learning methods can correctly recognize the distance of a speech sound source in most cases.
Methods for analyzing binaural room impulse responses are investigated. These methods are able to locate the early reflections in time and also to estimate their directions of arrival. This challenging problem could not be tackled completely, but this part of the work is an important step towards accurate estimation of the individual early reflections from a binaural room impulse response.
As the third part of the thesis, applications of sound signal analysis are studied. The most notable contributions are a novel eyes-free user interface controlled by finger snaps, and an investigation of the importance of features in audio surveillance.
The results of this thesis are steps towards building machines that can obtain information about the surrounding environment based on sound. In particular, the research on sound source distance estimation serves as important basic research in this area. The applications presented could be valuable in future telecommunications scenarios, such as augmented reality audio.
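The coherence feature at the heart of such a distance estimator can be sketched as an averaged magnitude-squared coherence between the two ear signals (frame length, hop, and windowing below are illustrative choices, not the thesis's settings): direct sound keeps the ears coherent, while diffuse reverberation lowers coherence as distance grows.

```python
import numpy as np

def interaural_coherence(left, right, frame=256, hop=128):
    """Mean magnitude-squared coherence between ear signals over STFT frames."""
    win = np.hanning(frame)
    Sll = Srr = Slr = 0.0
    for start in range(0, len(left) - frame + 1, hop):
        L = np.fft.rfft(win * left[start:start + frame])
        R = np.fft.rfft(win * right[start:start + frame])
        Sll = Sll + np.abs(L) ** 2          # left auto-spectrum
        Srr = Srr + np.abs(R) ** 2          # right auto-spectrum
        Slr = Slr + L * np.conj(R)          # cross-spectrum
    coh = np.abs(Slr) ** 2 / (Sll * Srr + 1e-12)
    return coh.mean()
```

A learning-based distance estimator, as described above, would use such coherence values (per frequency band) as input features rather than thresholding them directly.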
Proceedings of the EAA Spatial Audio Signal Processing symposium: SASP 2019
International audience
On the plausibility of simplified acoustic room representations for listener translation in dynamic binaural auralizations
This thesis investigates the perception of simplified acoustic room representations in position-dynamic binaural audio for listener translation. Dynamic binaural synthesis is an audio reproduction method to create spatial auditory illusions over headphones for virtual, augmented, and mixed reality (VR/AR/MR). Exploring immersive content in six degrees of freedom (6DOF) has become a typical demand. Realizing dynamic binaural sound field imitations with high physical accuracy requires high computational effort. However, previous psychoacoustic research indicates that humans have limited sensitivity to the details of the sound field, especially in the late reverberation. This bears the potential for physical simplifications in position-dynamic room auralization. For example, concepts based on the perceptual mixing time or on the audibility threshold of early reflections have been proposed, but a thorough psychoacoustic evaluation of these concepts is still pending. First, a setup for position-dynamic binaural room auralization was implemented and evaluated. Essential system parameters, such as the required spatial resolution of the position grid for dynamic adaptation, were examined. Because generally established test methods for the perceptual evaluation of spatial auditory illusions under interactive listener translation were lacking, the thesis explores different approaches for measuring plausibility. On this foundation, the work examines physical impairments and simplifications of the sound field in position-dynamic binaural auralizations of room acoustics. For the main experiments, sets of binaural room impulse responses (BRIRs) were measured along a line for listener translation in a relatively dry listening laboratory and in a reverberant seminar room of similar size.
The datasets include scenarios of a listener walking towards a virtual sound source, past it, away from it, or behind it. Two extreme cases of source orientation were considered to account for the effects of varying source directivity. The BRIR sets were systematically impaired and simplified to evaluate the perceptual effects. In particular, the concept of the perceptual mixing time and manipulated spatiotemporal patterns of early reflections served as test cases in the psychoacoustic studies. The results reveal a high potential for simplification but also underline the relevance of accurately imitating prominent early reflections. The findings confirm the concept of the perceptual mixing time for the considered cases of position-dynamic binaural reproduction. The observations highlight that common test scenarios for auralization, interpolation, and extrapolation are not sufficiently critical to draw general conclusions about the suitability of the tested rendering approaches. The thesis proposes strategies to address this.
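The mixing-time simplification can be sketched as follows (a generic illustration, not the exact processing used in the thesis; function names and the linear crossfade are assumptions): keep the position-specific direct sound and early reflections of each BRIR, and crossfade into one shared late reverberation tail at the perceptual mixing time.

```python
import numpy as np

def simplify_brir(brir, shared_tail, mix_sample, fade=64):
    """Keep the early part of a BRIR, crossfade into a shared late tail.

    brir, shared_tail: 1-D impulse responses of equal length (one ear).
    mix_sample: perceptual mixing time in samples (assumed known).
    """
    out = brir.astype(float).copy()
    ramp = np.linspace(1.0, 0.0, fade)        # linear crossfade weights
    seg = slice(mix_sample, mix_sample + fade)
    out[seg] = brir[seg] * ramp + shared_tail[seg] * (1.0 - ramp)
    out[mix_sample + fade:] = shared_tail[mix_sample + fade:]
    return out
```

In practice this would be applied per ear and per listener position, with the shared tail taken from a single reference position, so that only the early, position-dependent part of each BRIR needs to be stored and updated.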