Speech Enhancement and Dereverberation with Diffusion-based Generative Models
In this work, we build upon our previous publication and use diffusion-based
generative models for speech enhancement. We present a detailed overview of the
diffusion process that is based on a stochastic differential equation and delve
into an extensive theoretical examination of its implications. In contrast to
typical conditional generation tasks, we do not start the reverse process from pure
Gaussian noise but from a mixture of noisy speech and Gaussian noise. This
matches our forward process which moves from clean speech to noisy speech by
including a drift term. We show that this procedure enables using only 30
diffusion steps to generate high-quality clean speech estimates. By adapting
the network architecture, we are able to significantly improve the speech
enhancement performance, indicating that the network, rather than the
formalism, was the main limitation of our original approach. In an extensive
cross-dataset evaluation, we show that the improved method can compete with
recent discriminative models and achieves better generalization when evaluating
on a different corpus than used for training. We complement the results with an
instrumental evaluation using real-world noisy recordings and a listening
experiment, in which our proposed method is rated best. Examining different
sampler configurations for solving the reverse process allows us to balance the
performance and computational speed of the proposed method. Moreover, we show
that the proposed method is also suitable for dereverberation and thus not
limited to additive background noise removal. Code and audio examples are
available online, see https://github.com/sp-uhh/sgmse
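The forward process described above, which moves from clean speech toward noisy speech through a drift term while injecting Gaussian noise, can be illustrated with a simple Euler-Maruyama simulation. This is a hypothetical sketch, not the paper's actual SDE; the drift form and the `gamma`, `sigma`, and `steps` values below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def forward_diffusion(x0, y, steps=30, gamma=1.5, sigma=0.5, seed=0):
    """Euler-Maruyama simulation of a forward SDE whose drift pulls the
    clean signal x0 toward the noisy mixture y while adding Gaussian noise:
        dx = gamma * (y - x) dt + sigma dW
    All parameter values here are illustrative, not those of the paper.
    """
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps
    x = np.asarray(x0, dtype=float).copy()
    y = np.asarray(y, dtype=float)
    for _ in range(steps):
        drift = gamma * (y - x) * dt              # pull toward noisy speech
        noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
        x = x + drift + noise
    return x
```

Running the reverse of such a process from a mixture of noisy speech and Gaussian noise, rather than from pure noise, is what lets the method use as few as 30 steps.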
A review of domain adaptation without target labels
Domain adaptation has become a prominent problem setting in machine learning
and related fields. This review asks the question: how can a classifier learn
from a source domain and generalize to a target domain? We present a
categorization of approaches, divided into what we refer to as sample-based,
feature-based, and inference-based methods. Sample-based methods focus on
weighting individual observations during training based on their importance to
the target domain. Feature-based methods revolve around mapping, projecting,
and representing features such that a source classifier performs well on the
target domain. Inference-based methods incorporate adaptation into the
parameter estimation procedure, for instance through constraints on the
optimization procedure. Additionally, we review a number of conditions that
allow for formulating bounds on the cross-domain generalization error. Our
categorization highlights recurring ideas and raises questions important to
further research.
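As a concrete illustration of the sample-based idea, weighting source observations by their importance to the target domain, one common sketch estimates importance weights with a domain discriminator: a classifier trained to distinguish source from target samples yields a density-ratio estimate via its predicted probabilities. This is a generic example, not taken from the review itself; the classifier choice and the clipping threshold are arbitrary assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_source, X_target, clip=10.0):
    """Estimate w(x) ~ p_target(x) / p_source(x) for each source sample
    using a domain discriminator. Weights are clipped to limit variance.
    Minimal sketch; real methods may use other density-ratio estimators.
    """
    X = np.vstack([X_source, X_target])
    d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_t = clf.predict_proba(X_source)[:, 1]       # P(domain = target | x)
    w = p_t / np.clip(1.0 - p_t, 1e-6, None)      # odds = density ratio
    return np.clip(w, 0.0, clip)
```

The resulting weights can then be passed as per-sample weights when fitting the source classifier, so that training emphasizes source observations that resemble the target domain.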
Speech Recognition in noisy environment using Deep Learning Neural Network
Recent research in the field of automatic speaker recognition has shown that methods based
on deep learning neural networks provide better performance than other statistical classifiers. On
the other hand, these methods usually require adjustment of a significant number of parameters.
The goal of this thesis is to show that selecting appropriate parameter values can significantly
improve the speaker recognition performance of methods based on deep learning neural networks.
The reported study introduces an approach to automatic speaker recognition based on deep
neural networks and the stochastic gradient descent algorithm. It particularly focuses on three
parameters of the stochastic gradient descent algorithm: the learning rate, and the hidden and
input layer dropout rates. Additional attention was devoted to the research question of speaker
recognition under noisy conditions.
Thus, two experiments were conducted in the scope of this thesis. The first experiment was
intended to demonstrate that optimizing the observed parameters of the stochastic
gradient descent algorithm can improve speaker recognition performance in the
absence of noise. This experiment was conducted in two phases. In the first
phase, the recognition rate was observed while the hidden layer dropout rate and
the learning rate were varied and the input layer dropout rate was held
constant. In the second phase, the recognition rate was observed while the input
layer dropout rate and the learning rate were varied and the hidden layer
dropout rate was held constant. The second experiment was intended to show that
optimizing these parameters can improve speaker recognition performance even
under noisy conditions. To that end, different noise levels were artificially
applied to the original speech signal
- …
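The hyperparameter study described in the last abstract, varying the learning rate together with input- and hidden-layer dropout rates under SGD, can be sketched with a tiny one-hidden-layer network. This is a hypothetical illustration of the three tuned parameters, not the thesis's actual network or training setup; the architecture, default values, and inverted-dropout scaling are assumptions.

```python
import numpy as np

def train_mlp(X, y, lr=0.05, p_in=0.1, p_hid=0.3, hidden=16,
              epochs=30, seed=0):
    """One-hidden-layer softmax classifier trained with plain SGD,
    with dropout on the input and hidden layers -- the three tuned
    hyperparameters (lr, p_in, p_hid). Illustrative sketch only.
    Returns a prediction function over the learned weights.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = int(y.max()) + 1
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, k)); b2 = np.zeros(k)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # input-layer dropout (inverted scaling keeps expectation equal)
            x = X[i] * (rng.random(d) >= p_in) / (1.0 - p_in)
            a = x @ W1 + b1
            h = np.maximum(0.0, a)                       # ReLU
            mask = (rng.random(hidden) >= p_hid) / (1.0 - p_hid)
            hd = h * mask                                # hidden-layer dropout
            z = hd @ W2 + b2
            p = np.exp(z - z.max()); p /= p.sum()        # softmax
            g = p.copy(); g[y[i]] -= 1.0                 # dL/dz, cross-entropy
            gW2 = np.outer(hd, g)
            gh = (W2 @ g) * mask
            gh[a <= 0.0] = 0.0                           # ReLU gradient
            gW1 = np.outer(x, gh)
            W2 -= lr * gW2; b2 -= lr * g                 # SGD updates
            W1 -= lr * gW1; b1 -= lr * gh
    def predict(Xq):
        H = np.maximum(0.0, Xq @ W1 + b1)                # no dropout at test
        return (H @ W2 + b2).argmax(axis=1)
    return predict
```

Grid-searching `lr`, `p_in`, and `p_hid` with such a model, once on clean features and once on features with noise mixed in, mirrors the two experiments the abstract describes.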