Semi-Supervised Sound Source Localization Based on Manifold Regularization
Conventional speaker localization algorithms, based merely on the received
microphone signals, are often sensitive to adverse conditions, such as high
reverberation or a low signal-to-noise ratio (SNR). In some scenarios, e.g. in
meeting rooms or cars, it can be assumed that the source position is confined
to a predefined area, and the acoustic parameters of the environment are
approximately fixed. Such scenarios give rise to the assumption that the
acoustic samples from the region of interest have a distinct geometrical
structure. In this paper, we show that the high dimensional acoustic samples
indeed lie on a low dimensional manifold and can be embedded into a low
dimensional space. Motivated by this result, we propose a semi-supervised
source localization algorithm which recovers the inverse mapping between the
acoustic samples and their corresponding locations. The idea is to use an
optimization framework based on manifold regularization, that involves
smoothness constraints of possible solutions with respect to the manifold. The
proposed algorithm, termed Manifold Regularization for Localization (MRL), is
implemented in an adaptive manner. The initialization is conducted with only a
few labelled samples, each attached to its respective source location, and then
the system is gradually adapted as new unlabelled samples (with unknown source
locations) are received. Experimental results show superior localization
performance when compared with a recently presented algorithm based on a
manifold learning approach and with the generalized cross-correlation (GCC)
algorithm as a baseline.
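A generic form of the manifold-regularization idea is Laplacian-regularized least squares: fit a kernel regressor to the few labelled samples while penalizing functions that vary quickly along a graph built from all (labelled and unlabelled) samples. The sketch below is a minimal batch illustration under that assumption, not the authors' adaptive MRL implementation; the Gaussian kernel, its width, and the regularization weights are arbitrary choices.

```python
import numpy as np

def laprls(X, y, labeled, sigma=0.2, gamma_a=1e-3, gamma_i=1e-2):
    """Laplacian-regularized least squares (generic sketch, not the MRL
    algorithm itself).  X: (n, d) samples, y: (n,) targets (only labelled
    entries are used), labeled: (n,) boolean mask."""
    n = len(X)
    # Gaussian kernel over all samples, labelled and unlabelled alike
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    # Use the same affinities as graph weights; unnormalized Laplacian
    L = np.diag(K.sum(1)) - K
    J = np.diag(labeled.astype(float))       # selects labelled rows
    l = labeled.sum()
    # Closed-form solution of the manifold-regularized least-squares problem
    A = J @ K + gamma_a * l * np.eye(n) + gamma_i * (l / n ** 2) * (L @ K)
    alpha = np.linalg.solve(A, labeled * y)

    def predict(Xq):
        dq = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-dq / (2 * sigma ** 2)) @ alpha

    return predict
```

With only three labelled points on a one-dimensional "manifold", the unlabelled samples still shape the solution through the Laplacian term, which is the essence of the semi-supervised setting.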
PSD Estimation of Multiple Sound Sources in a Reverberant Room Using a Spherical Microphone Array
We propose an efficient method to estimate source power spectral densities
(PSDs) in a multi-source reverberant environment using a spherical microphone
array. The proposed method utilizes the spatial correlation between the
spherical harmonics (SH) coefficients of a sound field to estimate source PSDs.
The use of the spatial cross-correlation of the SH coefficients allows us to
employ the method in an environment with a higher number of sources compared to
conventional methods. Furthermore, the orthogonality property of the SH basis
functions saves the effort of designing specific beampatterns of a conventional
beamformer-based method. We evaluate the performance of the algorithm with
different numbers of sources in practical reverberant and non-reverberant rooms.
We also demonstrate an application of the method by separating source signals
using a conventional beamformer and a Wiener post-filter designed from the
estimated PSDs.
Comment: Accepted for WASPAA 201
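The post-filtering step mentioned at the end can be illustrated with the classical single-channel Wiener gain: once per-bin PSD estimates of the target and the competing sources (plus noise) are available, the gain is the target PSD divided by the total. This is a textbook sketch, not the paper's estimator; `psd_target` and `psd_others` stand in for the PSDs the method recovers.

```python
import numpy as np

def wiener_postfilter(psd_target, psd_others, eps=1e-12):
    """Per-frequency-bin Wiener gain built from estimated PSDs
    (generic sketch of the post-filter design step)."""
    gain = psd_target / (psd_target + psd_others + eps)
    return np.clip(gain, 0.0, 1.0)  # keep the gain in [0, 1]
```

The gain is then applied bin-wise to the beamformer output spectrum; bins dominated by the target pass through, bins dominated by interferers are attenuated.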
Online Localization and Tracking of Multiple Moving Speakers in Reverberant Environments
We address the problem of online localization and tracking of multiple moving
speakers in reverberant environments. The paper has the following
contributions. We use the direct-path relative transfer function (DP-RTF), an
inter-channel feature that encodes acoustic information robust against
reverberation, and we propose an online algorithm well suited for estimating
DP-RTFs associated with moving audio sources. Another crucial ingredient of the
proposed method is its ability to properly assign DP-RTFs to audio-source
directions. Towards this goal, we adopt a maximum-likelihood formulation and we
propose to use an exponentiated gradient (EG) to efficiently update
source-direction estimates starting from their currently available values. The
problem of multiple speaker tracking is computationally intractable because the
number of possible associations between observed source directions and physical
speakers grows exponentially with time. We adopt a Bayesian framework and we
propose a variational approximation of the posterior filtering distribution
associated with multiple speaker tracking, as well as an efficient variational
expectation-maximization (VEM) solver. The proposed online localization and
tracking method is thoroughly evaluated using two datasets that contain
recordings performed in real environments.
Comment: IEEE Journal of Selected Topics in Signal Processing, 201
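The exponentiated-gradient update used for the source-direction weights has a simple generic form: multiply the current weights by the exponential of the negative gradient and renormalize onto the probability simplex, which keeps the weights non-negative and summing to one. The paper's likelihood and its gradient are not reproduced here; the step below is the standard EG rule with an arbitrary step size.

```python
import numpy as np

def eg_step(w, grad, eta=0.5):
    """One exponentiated-gradient descent step on the simplex
    (illustrative; `grad` would come from the paper's likelihood)."""
    w_new = w * np.exp(-eta * grad)   # multiplicative update
    return w_new / w_new.sum()        # project back onto the simplex
```

Compared with an additive gradient step followed by projection, the multiplicative form never leaves the simplex, which is why EG is a natural fit for updating direction probabilities online.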
Structured Sparsity Models for Multiparty Speech Recovery from Reverberant Recordings
We tackle the multi-party speech recovery problem by modeling the
acoustics of the reverberant chambers. Our approach exploits structured sparsity
models to perform room modeling and speech recovery. We propose a scheme for
characterizing the room acoustics from the unknown competing speech sources
relying on localization of the early images of the speakers by sparse
approximation of the spatial spectra of the virtual sources in a free-space
model. The images are then clustered exploiting the low-rank structure of the
spectro-temporal components belonging to each source. This enables us to
identify the early support of the room impulse response function and its unique
map to the room geometry. To further tackle the ambiguity of the reflection
ratios, we propose a novel formulation of the reverberation model and estimate
the absorption coefficients through a convex optimization exploiting a joint
sparsity model formulated on the spatio-spectral sparsity of concurrent speech
representation. The acoustic parameters are then incorporated for separating
individual speech signals through either structured sparse recovery or inverse
filtering the acoustic channels. The experiments conducted on real data
recordings demonstrate the effectiveness of the proposed approach for
multi-party speech recovery and recognition.
Comment: 31 page
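The "sparse approximation of the spatial spectra" step can be pictured with a generic greedy solver such as orthogonal matching pursuit: given a dictionary of free-space propagation vectors, pick the few atoms that best explain the observations. The code below is a plain OMP sketch, not the structured solver of the paper; `A` stands for a hypothetical dictionary of candidate (image-)source positions and `k` for the assumed number of active sources.

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: k-sparse approximation of b over the
    columns of dictionary A (generic stand-in for the structured solver)."""
    residual = b.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit coefficients on the selected support by least squares
        x_s, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```

In the localization context, each recovered atom would correspond to one early image of a speaker, whose position then constrains the room geometry.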
Sound Source Localization in a Multipath Environment Using Convolutional Neural Networks
The propagation of sound in a shallow water environment is characterized by
boundary reflections from the sea surface and sea floor. These reflections
result in multiple (indirect) sound propagation paths, which can degrade the
performance of passive sound source localization methods. This paper proposes
the use of convolutional neural networks (CNNs) for the localization of sources
of broadband acoustic radiated noise (such as motor vessels) in shallow water
multipath environments. It is shown that CNNs operating on cepstrogram and
generalized cross-correlogram inputs are able to more reliably estimate the
instantaneous range and bearing of transiting motor vessels when the source
localization performance of conventional passive ranging methods is degraded.
The ensuing improvement in source localization performance is demonstrated
using real data collected during an at-sea experiment.
Comment: 5 pages, 5 figures, Final draft of paper submitted to 2018 IEEE
International Conference on Acoustics, Speech and Signal Processing (ICASSP)
15-20 April 2018 in Calgary, Alberta, Canada. arXiv admin note: text overlap
with arXiv:1612.0350
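A cepstrogram input of the kind described can be computed as the frame-wise real cepstrum, i.e. the inverse FFT of the log magnitude spectrum of each windowed frame. The sketch below is one plausible implementation; the Hann window, frame length, and hop size are assumptions, not values from the paper.

```python
import numpy as np

def cepstrogram(x, frame_len=256, hop=128, eps=1e-10):
    """Frame-wise real cepstrum of a 1-D signal (one plausible form of
    the CNN's cepstrogram input; framing parameters are assumptions)."""
    frames = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len] * np.hanning(frame_len)
        spec = np.abs(np.fft.rfft(frame))
        # Real cepstrum: inverse transform of the log magnitude spectrum
        ceps = np.fft.irfft(np.log(spec + eps))
        frames.append(ceps)
    return np.array(frames)   # shape: (n_frames, frame_len)
```

The cepstrum is useful in multipath settings because a delayed surface or bottom reflection shows up as a peak at the corresponding quefrency, which is exactly the structure a CNN can learn to exploit.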
Localization of Directional Sound Sources Supported by a priori Information of the Acoustic Environment
Speaker localization with microphone arrays has received significant attention in the past decade as a means for automated speaker tracking of individuals in a closed space for videoconferencing systems, directed speech capture systems, and surveillance systems. Traditional techniques are based on estimating the relative time difference of arrival (TDOA) between different channels by utilizing the cross-correlation function. As we show in the context of speaker localization, these estimates yield poor results due to the joint effect of reverberation and the directivity of sound sources. In this paper, we present a novel method that utilizes a priori acoustic information of the monitored region, which makes it possible to localize directional sound sources by taking the effect of reverberation into account. The proposed method shows a significant performance improvement over traditional methods under “noise-free” conditions. Further work is required to extend its capabilities to noisy environments.
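The cross-correlation-based TDOA estimation referred to here is typically implemented as GCC-PHAT: whiten the cross-spectrum so that only phase information remains, transform back to the lag domain, and pick the lag of the largest peak. The sketch below follows that textbook recipe (sign convention: a positive result means `x2` lags `x1`); it is the classical baseline the abstract contrasts against, not the proposed method.

```python
import numpy as np

def gcc_phat(x1, x2, fs=1.0):
    """Estimate the delay of x2 relative to x1 via GCC-PHAT
    (textbook sketch of the classical TDOA baseline)."""
    n = len(x1) + len(x2)                 # zero-pad for linear correlation
    X1 = np.fft.rfft(x1, n)
    X2 = np.fft.rfft(x2, n)
    R = X2 * np.conj(X1)
    R = R / (np.abs(R) + 1e-12)           # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n)
    max_shift = n // 2
    # Reorder so that index 0 corresponds to lag -max_shift
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    delay = int(np.argmax(np.abs(cc))) - max_shift
    return delay / fs
```

It is precisely this estimator whose peak is smeared by reverberation and source directivity, which motivates augmenting it with a priori acoustic information as the abstract proposes.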