A Deep Neural Network for Short-Segment Speaker Recognition
Today's interactive devices, such as smartphone assistants and smart speakers,
often deal with short-duration speech segments. As a result, speaker
recognition systems integrated into such devices are better served by models
capable of performing the recognition task on short-duration utterances. In
this paper, a new deep neural network, UtterIdNet, capable of performing
speaker recognition with short speech segments is proposed. Our
proposed model utilizes a novel architecture that makes it suitable for
short-segment speaker recognition through an efficiently increased use of
information in short speech segments. UtterIdNet has been trained and tested on
the VoxCeleb datasets, the latest benchmarks in speaker recognition.
Evaluations for different segment durations show consistent and stable
performance for short segments, with significant improvement over the previous
models for segments of 2 seconds, 1 second, and especially sub-second durations
(250 ms and 500 ms).
Comment: Accepted in Interspeech 201
Speaker Recognition Based on Deep Learning: An Overview
Speaker recognition is a task of identifying persons from their voices.
Recently, deep learning has dramatically revolutionized speaker recognition.
However, there is a lack of comprehensive reviews of this exciting progress.
In this paper, we review several major subtasks of speaker recognition,
including speaker verification, identification, diarization, and robust speaker
recognition, with a focus on deep-learning-based methods. Because the major
advantage of deep learning over conventional methods is its representation
ability, which is able to produce highly abstract embedding features from
utterances, we first pay close attention to deep-learning-based speaker
feature extraction, covering the inputs, network structures, temporal pooling
strategies, and objective functions, which are the fundamental components of
many speaker recognition subtasks. Then, we give an overview of
speaker diarization, with an emphasis on recent supervised, end-to-end, and
online diarization. Finally, we survey robust speaker recognition from the
perspectives of domain adaptation and speech enhancement, which are two major
approaches to dealing with the domain mismatch and noise problems. Popular and
recently released corpora are listed at the end of the paper.
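One of the components this overview surveys, temporal pooling, can be illustrated with a minimal sketch. Statistics pooling (concatenating the per-dimension mean and standard deviation over frames, as popularized by x-vector systems) is one common strategy; the function name and dimensions below are illustrative, not taken from either paper:

```python
import numpy as np

def stats_pooling(frame_features: np.ndarray) -> np.ndarray:
    """Collapse variable-length frame-level features of shape (T, D)
    into a fixed utterance-level vector of shape (2*D,) by
    concatenating the per-dimension mean and standard deviation."""
    mean = frame_features.mean(axis=0)
    std = frame_features.std(axis=0)
    return np.concatenate([mean, std])

# Utterances of different durations map to the same embedding size,
# which is what makes a fixed-size speaker embedding possible.
short = np.random.randn(25, 64)   # few frames (a short segment)
long_ = np.random.randn(300, 64)  # many frames (a longer segment)
assert stats_pooling(short).shape == stats_pooling(long_).shape == (128,)
```

In practice this layer sits between the frame-level network and the utterance-level classifier, and the surveyed systems differ mainly in what replaces the simple mean/std (e.g. attentive or learnable pooling).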