Regression and Classification for Direction-of-Arrival Estimation with Convolutional Recurrent Neural Networks
We present a novel learning-based approach to estimate the
direction-of-arrival (DOA) of a sound source using a convolutional recurrent
neural network (CRNN) trained via regression on synthetic data and Cartesian
labels. We also describe an improved method to generate synthetic data to train
the neural network using state-of-the-art sound propagation algorithms that
model specular as well as diffuse reflections of sound. We compare our model
against three other CRNNs trained using different formulations of the same
problem: classification on categorical labels and regression on
spherical-coordinate labels. In practice, our model achieves up to a 43%
decrease in angular error over prior methods. Modeling diffuse reflections
yields 34% and 41% reductions in angular prediction error on the LOCATA and
SOFA datasets, respectively, over prior data-generation schemes based on
the image-source method. Our method achieves a further 3% error reduction
over prior classification-based networks, while using 36% fewer network
parameters.
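As a concrete illustration of the regression formulation (a minimal sketch, not the paper's code; the helper names and the sample prediction are hypothetical), Cartesian labels are unit direction vectors, and accuracy is measured as the angle between the predicted and true vectors:

```python
import numpy as np

def to_cartesian(azimuth_deg, elevation_deg):
    """Unit-vector (Cartesian) DOA label from spherical angles; such
    labels avoid the wrap-around discontinuity of raw azimuth values."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def angular_error_deg(pred, target):
    """Angle in degrees between a (possibly unnormalized) predicted
    vector and the ground-truth unit vector."""
    pred = pred / np.linalg.norm(pred)
    cos = np.clip(np.dot(pred, target), -1.0, 1.0)  # guard rounding noise
    return np.degrees(np.arccos(cos))

true_doa = to_cartesian(45.0, 10.0)
pred_doa = np.array([0.71, 0.68, 0.19])        # stand-in network output
print(angular_error_deg(pred_doa, true_doa))   # small error, in degrees
```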
Sound Event Localization, Detection, and Tracking by Deep Neural Networks
In this thesis, we present novel sound representations and classification methods for the task of sound event localization, detection, and tracking (SELDT). The human auditory system has evolved to localize multiple sound events, recognize them, and track their motion individually in an acoustic environment. This ability makes humans context-aware and enables them to interact with their surroundings naturally. Developing similar methods for machines will provide an automatic description of the social and human activities around them and make machines context-aware in a comparable way. Such methods can be employed to help the hearing impaired visualize sounds, to support robot navigation, and to monitor biodiversity, homes, and cities.
A real-life acoustic scene is complex in nature, with multiple temporally and spatially overlapping sound events, including stationary and moving events with varying angular velocities. Additionally, each individual sound event class exhibits considerable variability; for example, different cars have different horns, and even within the same car model, the duration and temporal structure of the horn sound depend on the driver. Performing SELDT robustly in such overlapping and dynamic sound scenes is challenging for machines. Hence, this thesis investigates the SELDT task using a data-driven approach based on deep neural networks (DNNs).
The sound event detection (SED) task requires detecting the onset and offset times of individual sound events along with their corresponding labels. In this regard, we propose to use spatial and perceptual features extracted from multichannel audio for SED with two different DNN architectures: recurrent neural networks (RNNs) and convolutional recurrent neural networks (CRNNs). We show that multichannel audio features improve SED performance for overlapping sound events in comparison to traditional single-channel audio features. The proposed novel features and methods produced state-of-the-art performance on the real-life SED task and won the IEEE AASP DCASE challenge in two consecutive years, 2016 and 2017.
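For readers unfamiliar with the architecture family, the following is a minimal CRNN skeleton for frame-wise SED: convolutional blocks over a multichannel spectrogram, a recurrent layer over time, and per-frame sigmoid class activities. All layer sizes, the channel count, and the class count are illustrative assumptions, not the thesis configuration.

```python
import torch
import torch.nn as nn

class SEDCRNN(nn.Module):
    """Sketch of a CRNN for sound event detection: frequency pooling
    preserves the time resolution needed for onset/offset estimation."""
    def __init__(self, in_ch=4, n_classes=11, rnn_units=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((1, 8)),   # pool frequency only, keep all frames
            nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((1, 8)),
        )
        self.gru = nn.GRU(64 * 2, rnn_units, batch_first=True,
                          bidirectional=True)
        self.head = nn.Linear(2 * rnn_units, n_classes)

    def forward(self, x):           # x: (batch, channels, time, freq)
        x = self.conv(x)            # -> (batch, 64, time, freq // 64)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        x, _ = self.gru(x)
        return torch.sigmoid(self.head(x))  # per-frame class activities

model = SEDCRNN()
spec = torch.randn(1, 4, 100, 128)  # 4 channels, 100 frames, 128 freq bins
print(model(spec).shape)            # torch.Size([1, 100, 11])
```

Thresholding the per-frame activities then yields the onset and offset times for each class.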
Sound event localization is the task of spatially locating individual sound events. Traditionally, this has been approached using parametric methods. In this thesis, we propose a CRNN for detecting the azimuth and elevation angles of multiple temporally overlapping sound events. This is the first DNN-based method performing localization over the complete azimuth and elevation space. Unlike parametric methods, which require the number of active sources to be known, the proposed method learns this information directly from the input data and estimates the respective spatial locations. Further, the proposed CRNN is shown to be more robust than parametric methods in reverberant scenarios.
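The property that the number of sources need not be known in advance can be realized, for instance, by multi-label thresholding over an azimuth/elevation grid; the sketch below is purely illustrative (the grid resolution and threshold are assumptions, not the thesis method):

```python
import numpy as np

def active_directions(scores, grid, threshold=0.5):
    """Select active DOAs from one frame of per-direction scores.
    Thresholding, rather than picking a fixed number of peaks, lets
    the count of detected sources vary from frame to frame."""
    return grid[scores > threshold]          # (n_active, 2) az/el degrees

# 10-degree grid over the full azimuth/elevation space (illustrative)
az, el = np.meshgrid(np.arange(-180, 180, 10), np.arange(-80, 90, 10))
grid = np.stack((az.ravel(), el.ravel()), axis=1)
scores = np.random.rand(len(grid))           # stand-in for one output frame
print(active_directions(scores, grid, threshold=0.95))
```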
Finally, the detection and localization tasks are performed jointly using a CRNN. This method additionally tracks the spatial location over time, thus producing the SELDT results. This is the first DNN-based SELDT method and is shown to perform on par with stand-alone baselines for SED, localization, and tracking. The proposed SELDT method is evaluated on nine datasets that represent anechoic and reverberant sound scenes, stationary and moving sources with varying velocities, varying numbers of overlapping sound events, and different microphone array formats. The results show that the SELDT method can track multiple overlapping sound events that are both spatially stationary and moving.
Towards End-to-End Acoustic Localization using Deep Learning: from Audio Signal to Source Position Coordinates
This paper presents a novel approach for indoor acoustic source localization
using microphone arrays and based on a Convolutional Neural Network (CNN). The
proposed solution is, to the best of our knowledge, the first published work in
which the CNN is designed to directly estimate the three-dimensional
position of an acoustic source, using the raw audio signal as the input
information and avoiding hand-crafted audio features. Given the limited
amount of available localization data, we propose a two-step training
strategy. We first train the network on semi-synthetic data generated from
close-talk speech recordings, in which we simulate the time delays and the
distortion experienced by the signal as it propagates from the source to
the microphone array. We then fine-tune this network on a small amount of
real data. Our experimental results show that this strategy produces
networks that significantly outperform existing localization methods based
on SRP-PHAT strategies. In addition, our experiments show that the CNN
method is more robust than the other methods to variations in speaker
gender and analysis window size.
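For context, the SRP-PHAT family of methods builds on the generalized cross-correlation with phase transform (GCC-PHAT) between microphone pairs; a minimal single-pair sketch follows (an illustration of the classical technique, not the paper's implementation):

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay (seconds) of sig relative to ref using
    GCC-PHAT; PHAT whitening keeps only phase, which is robust to
    reverberation-induced spectral coloring."""
    n = sig.shape[0] + ref.shape[0]
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-15                    # phase transform weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Synthetic check: second channel is the first delayed by 5 samples
fs = 16000
x = np.random.randn(fs)
y = np.concatenate((np.zeros(5), x[:-5]))
print(gcc_phat(y, x, fs) * fs)                # ~5.0 samples of delay
```

SRP-PHAT then sums such correlations over all microphone pairs for each candidate source position and picks the maximum.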
L3DAS21 Challenge: Machine Learning for 3D Audio Signal Processing
The L3DAS21 Challenge is aimed at encouraging and fostering collaborative
research on machine learning for 3D audio signal processing, with particular
focus on 3D speech enhancement (SE) and 3D sound localization and detection
(SELD). Alongside the challenge, we release the L3DAS21 dataset, a 65-hour
3D audio corpus, accompanied by a Python API that facilitates data usage
and the results-submission stage. Usually, machine learning approaches to
3D audio tasks are based on single-perspective Ambisonics recordings or on
arrays of single-capsule microphones. We propose, instead, a novel
multichannel audio configuration based on multiple-source and
multiple-perspective Ambisonics recordings, performed with an array of two
first-order Ambisonics microphones.
To the best of our knowledge, it is the first time that a dual-mic Ambisonics
configuration is used for these tasks. We provide baseline models and results
for both tasks, obtained with state-of-the-art architectures: FaSNet for SE and
SELDNet for SELD. This report aims to provide all the information needed
to participate in the L3DAS21 Challenge, illustrating the details of the
L3DAS21 dataset, the challenge tasks, and the baseline models.
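Although the report concentrates on the dataset, tasks, and baselines, one common input feature for Ambisonics-based SELD models is the acoustic intensity vector computed from the first-order B-format channels; the sketch below assumes that feature choice, with illustrative STFT parameters:

```python
import numpy as np
from scipy.signal import stft

def foa_intensity(w, x, y, z, fs, nfft=512):
    """Unit acoustic-intensity direction per time-frequency bin from
    first-order Ambisonics (B-format) channels W, X, Y, Z."""
    _, _, W = stft(w, fs, nperseg=nfft)
    _, _, X = stft(x, fs, nperseg=nfft)
    _, _, Y = stft(y, fs, nperseg=nfft)
    _, _, Z = stft(z, fs, nperseg=nfft)
    I = np.real(np.conj(W) * np.stack((X, Y, Z)))   # active intensity
    return I / (np.linalg.norm(I, axis=0) + 1e-12)  # normalize per bin

fs = 16000
rng = np.random.default_rng(0)
w, x, y, z = rng.standard_normal((4, fs))   # 1 s of stand-in B-format audio
print(foa_intensity(w, x, y, z, fs).shape)  # (3, freq_bins, time_frames)
```

With the dual-microphone configuration described above, the two 4-channel recordings would simply yield two such intensity maps per frame.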
Realistic multi-microphone data simulation for distant speech recognition
The availability of realistic simulated corpora is of key importance for the
future progress of distant speech recognition technology. The reliability,
flexibility and low computational cost of a data simulation process may
ultimately allow researchers to train, tune and test different techniques in a
variety of acoustic scenarios, avoiding the laborious effort of directly
recording real data from the targeted environment.
In the last decade, several simulated corpora have been released to the
research community, including the datasets distributed in the context of
projects and international challenges such as CHiME and REVERB. These
efforts have been extremely useful for deriving baselines and common
evaluation frameworks for comparison purposes. At the same time, in many
cases they have highlighted the need for better coherence between real and
simulated conditions.
In this paper, we examine this issue and describe our approach to
generating realistic corpora in a domestic context. Experimental
validation, conducted in a multi-microphone scenario, shows that a
comparable performance trend can be observed with both real and simulated
data across different recognition frameworks, acoustic models, and
multi-microphone processing techniques.
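The core mechanism behind such simulated corpora is the convolution of close-talk speech with measured or synthetic room impulse responses, plus additive noise at a controlled SNR; the following is a minimal single-channel sketch with stand-in signals, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_channel(clean, rir, noise, snr_db):
    """Simulate one distant microphone: convolve clean speech with a
    room impulse response, then add noise scaled to a target SNR."""
    reverberant = fftconvolve(clean, rir)[: len(clean)]
    noise = noise[: len(reverberant)]
    sig_pow = np.mean(reverberant ** 2)
    noise_pow = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))
    return reverberant + gain * noise

fs = 16000
clean = np.random.randn(fs)                  # stand-in for close-talk speech
rir = np.random.randn(fs // 4) * np.exp(-np.linspace(0, 8, fs // 4))
noise = np.random.randn(fs)
distant = simulate_channel(clean, rir, noise, snr_db=10)

# A multi-microphone corpus repeats this per channel, using a different
# impulse response (and noise excerpt) for each microphone position.
```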