Exploring the Encoding Layer and Loss Function in End-to-End Speaker and Language Recognition System
In this paper, we explore the encoding/pooling layer and loss function in the
end-to-end speaker and language recognition system. First, a unified and
interpretable end-to-end system for both speaker and language recognition is
developed. It accepts variable-length input and produces an utterance level
result. In the end-to-end system, the encoding layer plays a role in
aggregating the variable-length input sequence into an utterance level
representation. Besides the basic temporal average pooling, we introduce a
self-attentive pooling layer and a learnable dictionary encoding layer to get
the utterance level representation. In terms of the loss function for open-set
speaker verification, center loss and angular softmax loss are introduced in the
end-to-end system to obtain more discriminative speaker embeddings. Experimental
results on the VoxCeleb and NIST LRE 07 datasets show that the performance of the
end-to-end learning system can be significantly improved by the proposed
encoding layers and loss functions.
Comment: Accepted for Speaker Odyssey 201
A Novel Windowing Technique for Efficient Computation of MFCC for Speaker Recognition
In this paper, we propose a novel family of windowing techniques to compute
Mel Frequency Cepstral Coefficients (MFCC) for automatic speaker recognition
from speech. The proposed method is based on a fundamental property of the
discrete-time Fourier transform (DTFT) related to differentiation in the frequency domain.
Classical windowing schemes such as the Hamming window are modified to obtain
derivatives of the discrete-time Fourier transform coefficients. It has been
mathematically shown that the slope and phase of the power spectrum are inherently
incorporated in the newly computed cepstrum. Speaker recognition systems based on
our proposed family of window functions are shown to attain substantial and
consistent performance improvements over the baseline single-taper Hamming window
as well as a recently proposed multitaper windowing technique.
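The DTFT differentiation property the abstract relies on can be checked numerically: windowing a frame with the modified window n·w[n] yields, up to a factor of j, the frequency derivative of the spectrum obtained with w[n]. This is only a sketch of that property, not the paper's full MFCC pipeline:

```python
import numpy as np

def hamming(N):
    # classical single-taper Hamming window
    n = np.arange(N)
    return 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))

def dtft(x, omega):
    # DTFT of x evaluated at a single frequency omega
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * omega * n))

N = 16
w = hamming(N)
x = np.random.default_rng(1).standard_normal(N)  # made-up speech frame
n = np.arange(N)

# derivative of the windowed spectrum via a central finite difference
omega, eps = 1.0, 1e-6
dX = (dtft(x * w, omega + eps) - dtft(x * w, omega - eps)) / (2 * eps)

# same quantity from the modified window n * w[n]:
# DTFT{n * y[n]}(omega) = j * d/domega DTFT{y}(omega)
lhs = dtft(x * n * w, omega)
print(np.allclose(lhs, 1j * dX, atol=1e-4))  # True
```

So a spectrum carrying derivative information is obtained simply by swapping w[n] for n·w[n] before the FFT, which is why the technique adds essentially no computational cost over the baseline window.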
Speaker recognition utilizing distributed DCT-II based Mel frequency cepstral coefficients and fuzzy vector quantization
In this paper, a novel Automatic Speaker Recognition (ASR) system is presented. The new ASR system includes novel feature extraction and vector classification steps utilizing distributed Discrete Cosine Transform (DCT-II) based Mel Frequency Cepstral Coefficients (MFCC) and Fuzzy Vector Quantization (FVQ). The ASR algorithm utilizes an MFCC-based approach to identify dynamic features that are used for Speaker Recognition (SR).
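The DCT-II step at the core of such an MFCC front end can be sketched as follows. This is a hedged illustration of the standard orthonormal DCT-II applied to log mel filterbank energies, not the paper's distributed-DCT formulation, and the filterbank values are made up:

```python
import numpy as np

def dct2(x):
    """Orthonormal DCT-II of a 1-D array (the transform MFCC
    extraction applies to log mel filterbank energies)."""
    N = len(x)
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.cos(np.pi * (n + 0.5) * k / N)
    y = 2 * (C @ x)
    # orthonormal scaling so the transform preserves energy
    y[0] *= np.sqrt(1 / (4 * N))
    y[1:] *= np.sqrt(1 / (2 * N))
    return y

# made-up log mel filterbank energies for one frame
log_mel = np.log(np.array([1.2, 0.8, 0.5, 0.9, 1.5, 2.0, 1.1, 0.7]))

# MFCCs: keep the first few DCT-II coefficients
mfcc = dct2(log_mel)[:4]
print(mfcc.shape)  # (4,)
```

Because the scaling makes the transform orthonormal, it preserves the Euclidean norm of its input, which is a convenient sanity check on any DCT-II implementation.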
- …