Adversarial Network Bottleneck Features for Noise Robust Speaker Verification
In this paper, we propose a noise robust bottleneck feature representation
which is generated by an adversarial network (AN). The AN comprises two
cascaded networks: an encoding network (EN) and a discriminative network (DN).
Mel-frequency cepstral coefficients (MFCCs) of clean and noisy speech are used
as input to the EN and the output of the EN is used as the noise robust
feature. The EN and DN are trained in turn: when training the DN, the noise
types serve as the training labels, and when training the EN, all labels are
set to the same value, i.e., the clean speech label, which aims to make the AN
features invariant to noise and thus achieve noise robustness. We evaluate the
performance of the proposed feature on a Gaussian Mixture Model-Universal
Background Model based speaker verification system, and compare it to MFCC
features of speech enhanced by short-time spectral amplitude minimum mean
square error (STSA-MMSE) and deep neural network-based speech enhancement
(DNN-SE) methods. Experimental results on the RSR2015 database show that the
proposed AN bottleneck feature (AN-BN) dramatically outperforms the STSA-MMSE
and DNN-SE based MFCCs for different noise types and signal-to-noise ratios.
Furthermore, the AN-BN feature is able to improve the speaker verification
performance under the clean condition.
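The alternating label scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function names and the noise-type inventory are hypothetical, and index 0 is assumed to denote clean speech:

```python
import numpy as np

# Hypothetical noise-type inventory; index 0 denotes clean speech.
NOISE_TYPES = ["clean", "babble", "car", "street"]

def dn_labels(noise_ids):
    """DN training phase: the true noise-type indices are the labels."""
    return np.asarray(noise_ids, dtype=int)

def en_labels(noise_ids):
    """EN training phase: every utterance gets the clean-speech label,
    pushing the encoder toward noise-invariant bottleneck features."""
    return np.zeros(len(noise_ids), dtype=int)

batch = [0, 2, 3, 1]              # noise-type index per utterance
print(dn_labels(batch).tolist())  # [0, 2, 3, 1]
print(en_labels(batch).tolist())  # [0, 0, 0, 0]
```

Because the EN phase collapses all noise types to a single target, the adversarial game drives the encoder output toward features the DN cannot use to discriminate noise conditions.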
The INTERSPEECH 2020 Far-Field Speaker Verification Challenge
The INTERSPEECH 2020 Far-Field Speaker Verification Challenge (FFSVC 2020)
addresses three different research problems under well-defined conditions:
far-field text-dependent speaker verification from single microphone array,
far-field text-independent speaker verification from single microphone array,
and far-field text-dependent speaker verification from distributed microphone
arrays. All three tasks pose a cross-channel challenge to the participants. To
simulate the real-life scenario, the enrollment utterances are recorded from
close-talk cellphone, while the test utterances are recorded from the far-field
microphone arrays. In this paper, we describe the database, the challenge, and
the baseline system, which is based on a ResNet-based deep speaker network with
cosine similarity scoring. For a given utterance, the speaker embeddings of
the different channels are averaged with equal weight to form the final
embedding. The baseline
system achieves minDCFs of 0.62, 0.66, and 0.64 and EERs of 6.27%, 6.55%, and
7.18% for task 1, task 2, and task 3, respectively.
Comment: Submitted to INTERSPEECH 2020
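The baseline's scoring back end is simple enough to sketch. The following is a toy illustration under stated assumptions (function names and the 8-dimensional toy embeddings are hypothetical; real systems use far higher-dimensional embeddings):

```python
import numpy as np

def pooled_embedding(channel_embs):
    """Average the per-channel speaker embeddings into one final embedding."""
    return np.mean(channel_embs, axis=0)

def cosine_score(enroll, test):
    """Cosine similarity between enrollment and test embeddings."""
    return float(np.dot(enroll, test)
                 / (np.linalg.norm(enroll) * np.linalg.norm(test)))

# Toy example: 4 microphone channels, 8-dim embeddings.
rng = np.random.default_rng(0)
channels = rng.normal(size=(4, 8))
emb = pooled_embedding(channels)
print(round(cosine_score(emb, emb), 6))  # identical embeddings -> 1.0
```

Equal-weight averaging treats every channel as equally reliable, which is the simplest way to fuse distributed-array channels before a single cosine comparison.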
Speaker Representation Learning using Global Context Guided Channel and Time-Frequency Transformations
In this study, we propose the global context guided channel and
time-frequency transformations to model the long-range, non-local
time-frequency dependencies and channel variances in speaker representations.
We use the global context information to enhance important channels and
recalibrate salient time-frequency locations by computing the similarity
between the global context and local features. The proposed modules, together
with a popular ResNet based model, are evaluated on the VoxCeleb1 dataset,
which is a large scale speaker verification corpus collected in the wild. This
lightweight block can be easily incorporated into a CNN model at little
additional computational cost, and it improves speaker verification
performance by a large margin over the baseline ResNet-LDE model and the
Squeeze-and-Excitation block. Detailed ablation studies are also
performed to analyze various factors that may impact the performance of the
proposed modules. We find that by employing the proposed L2-tf-GTFC
transformation block, the Equal Error Rate decreases from 4.56% to 3.07%, a
relative 32.68% reduction, together with a relative 27.28% improvement in the
DCF score. The results indicate that our proposed global context guided
transformation modules can efficiently improve the learned speaker
representations by achieving time-frequency and channel-wise feature
recalibration.
Comment: Accepted to Interspeech 202
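The channel-recalibration idea can be sketched in the Squeeze-and-Excitation style the abstract compares against: pool a global context vector, derive per-channel gates, and rescale the channels. This is a simplified hypothetical sketch; the paper's modules additionally recalibrate salient time-frequency locations via similarity to the global context, which is omitted here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_recalibrate(feat, w1, w2):
    """Gate each channel by a weight derived from the global context.
    feat: (C, F, T) time-frequency feature map; w1, w2: bottleneck MLP
    weights of shapes (H, C) and (C, H)."""
    ctx = feat.mean(axis=(1, 2))             # global context vector, shape (C,)
    gates = sigmoid(w2 @ np.tanh(w1 @ ctx))  # per-channel gates in (0, 1)
    return feat * gates[:, None, None]       # rescale each channel

feat = np.ones((2, 3, 4))
out = channel_recalibrate(feat, np.eye(2), np.eye(2))
print(out.shape)  # (2, 3, 4)
```

The gating leaves the feature-map shape unchanged, which is why such a block can be dropped into an existing CNN with little extra cost.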
Improving Multi-Scale Aggregation Using Feature Pyramid Module for Robust Speaker Verification of Variable-Duration Utterances
Currently, the most widely used approach to speaker verification is deep
speaker embedding learning. In this approach, we obtain a speaker embedding
vector by pooling single-scale features that are extracted from the last layer
of a speaker feature extractor. Multi-scale aggregation (MSA), which utilizes
multi-scale features from different layers of the feature extractor, has
recently been introduced and shows superior performance for variable-duration
utterances. To increase robustness when dealing with utterances of arbitrary
duration, this paper improves MSA by using a feature pyramid module. The
module enhances speaker-discriminative information of features from multiple
layers via a top-down pathway and lateral connections. We extract speaker
embeddings using the enhanced features that contain rich speaker information
with different time scales. Experiments on the VoxCeleb dataset show that the
proposed module improves previous MSA methods with a smaller number of
parameters. It also achieves better performance than state-of-the-art
approaches for both short and long utterances.
Comment: Accepted to Interspeech 202
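The top-down pathway with lateral connections can be sketched on toy arrays. This is a hypothetical simplification (the function names, channel projections, and nearest-neighbor-style upsampling are assumptions, not the paper's architecture):

```python
import numpy as np

def feature_pyramid(features, laterals):
    """Toy top-down pathway with lateral connections over per-layer
    feature maps.  features: list of (C_i, T_i) arrays, shallow -> deep,
    where deeper maps have half the time resolution; laterals: per-level
    projection matrices mapping each level to a common channel dimension."""
    top = laterals[-1] @ features[-1]        # start at the deepest level
    out = [top]
    for lvl in range(len(features) - 2, -1, -1):
        lateral = laterals[lvl] @ features[lvl]               # lateral connection
        up = np.repeat(top, 2, axis=1)[:, :lateral.shape[1]]  # x2 upsampling
        top = lateral + up                                    # top-down fusion
        out.insert(0, top)
    return out

shallow, deep = np.ones((3, 8)), np.ones((3, 4))
enhanced = feature_pyramid([shallow, deep], [np.eye(3), np.eye(3)])
print([e.shape for e in enhanced])  # [(3, 8), (3, 4)]
```

Propagating the deepest, most speaker-discriminative map back down and fusing it with each shallower map is what lets every scale contribute enriched speaker information to the pooled embedding.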
A Review on Speech Recognition Methods
Voice recognition is the identification of a speaker on the basis of the characteristics of the voice. For this, features of speech patterns that differ between individuals are used. In this paper, speaker recognition systems are discussed. Implementing a speaker voice recognition system in MATLAB makes it possible to use voice for real-life applications. This paper provides a brief review of different DSP-based techniques applied to speech recognition.
Rhythm-Flexible Voice Conversion without Parallel Data Using Cycle-GAN over Phoneme Posteriorgram Sequences
Speaking rate refers to the average number of phonemes within some unit time,
while the rhythmic patterns refer to duration distributions for realizations of
different phonemes within different phonetic structures. Both are key
components of prosody in speech, which is different for different speakers.
Models like cycle-consistent adversarial network (Cycle-GAN) and variational
auto-encoder (VAE) have been successfully applied to voice conversion tasks
without parallel data. However, due to the neural network architectures and
feature vectors chosen for these approaches, the length of the predicted
utterance has to be fixed to that of the input utterance, which limits the
flexibility in mimicking the speaking rates and rhythmic patterns for the
target speaker. On the other hand, a sequence-to-sequence learning model has
been used to remove the above length constraint, but parallel training data
are needed.
In this paper, we propose an approach utilizing a sequence-to-sequence model
trained with an unsupervised Cycle-GAN to perform the transformation between the
phoneme posteriorgram sequences for different speakers. In this way, the length
constraint mentioned above is removed to offer rhythm-flexible voice conversion
without requiring parallel data. Preliminary evaluation on two datasets showed
very encouraging results.
Comment: 8 pages, 6 figures, Submitted to SLT 201
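The property that enables training without parallel data is the cycle-consistency term of the Cycle-GAN objective. As a minimal sketch (the function names and toy shapes are hypothetical, and identity converters stand in for trained networks):

```python
import numpy as np

def cycle_consistency_loss(x_a, x_b, g_ab, g_ba):
    """L1 cycle-consistency: converting a phoneme posteriorgram sequence
    to the other speaker and back should reconstruct the original, which
    is what removes the need for parallel utterance pairs."""
    return (np.abs(g_ba(g_ab(x_a)) - x_a).mean()
            + np.abs(g_ab(g_ba(x_b)) - x_b).mean())

# With identity "converters" the round trip is perfect, so the loss is 0.
identity = lambda x: x
loss = cycle_consistency_loss(np.ones((5, 3)), np.zeros((7, 3)),
                              identity, identity)
print(loss)  # 0.0
```

Because the loss only compares each sequence with its own round-trip reconstruction, the two speakers' corpora never need to contain the same sentences.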
Single-Microphone Speech Enhancement and Separation Using Deep Learning
The cocktail party problem comprises the challenging task of understanding a
speech signal in a complex acoustic environment, where multiple speakers and
background noise signals simultaneously interfere with the speech signal of
interest. A signal processing algorithm that can effectively increase the
speech intelligibility and quality of speech signals in such complicated
acoustic situations is highly desirable, especially for applications involving
mobile communication devices and hearing assistive devices. Due to the
re-emergence of machine learning techniques, today known as deep learning,
the challenges involved in designing such algorithms might be overcome. In this PhD thesis,
we study and develop deep learning-based techniques for two sub-disciplines of
the cocktail party problem: single-microphone speech enhancement and
single-microphone multi-talker speech separation. Specifically, we conduct
in-depth empirical analysis of the generalizability capability of modern deep
learning-based single-microphone speech enhancement algorithms. We show that
performance of such algorithms is closely linked to the training data, and good
generalizability can be achieved with carefully designed training data.
Furthermore, we propose uPIT, a deep learning-based algorithm for
single-microphone speech separation and we report state-of-the-art results on a
speaker-independent multi-talker speech separation task. Additionally, we show
that uPIT works well for joint speech separation and enhancement without
explicit prior knowledge about the noise type or number of speakers. Finally,
we show that deep learning-based speech enhancement algorithms designed to
minimize the classical short-time spectral amplitude mean squared error lead
to enhanced speech signals that are essentially optimal in terms of STOI, a
state-of-the-art speech intelligibility estimator.
Comment: PhD Thesis. 233 pages
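The core of permutation invariant training is easy to state in code: since the separator's outputs have no fixed speaker order, the loss is computed under the best output-to-speaker assignment, chosen once per utterance in uPIT. The sketch below is a hypothetical simplification using MSE on magnitude-like arrays, not the thesis's exact formulation:

```python
import numpy as np
from itertools import permutations

def upit_loss(est, ref):
    """Utterance-level permutation invariant training (uPIT) loss: pick
    the single output<->speaker assignment that minimizes the total MSE
    over the whole utterance.  est, ref: (num_speakers, time, freq)."""
    S = est.shape[0]
    best = np.inf
    for perm in permutations(range(S)):
        loss = np.mean([np.mean((est[p] - ref[s]) ** 2)
                        for s, p in enumerate(perm)])
        best = min(best, float(loss))
    return best

a, b = np.ones((10, 4)), np.zeros((10, 4))
ref = np.stack([a, b])
est = np.stack([b, a])      # outputs in swapped order
print(upit_loss(est, ref))  # 0.0 -- the best permutation unswaps them
```

Fixing one assignment per utterance, rather than per frame, is what keeps each output stream attached to one speaker for the whole utterance and makes the method speaker-independent.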