655 research outputs found

    Automatic Speaker Recognition System in Adverse Conditions — Implication of Noise and Reverberation on System Performance

    Speaker recognition has developed over the past few decades into a supposedly mature technique. Existing methods typically rely on robust features extracted from clean speech. In real-world applications, especially security- and forensics-related ones, the reliability of recognition becomes crucial, while limited speech samples and adverse acoustic conditions, most notably noise and reverberation, impose further complications. This paper presents a study into the behavior of typical speaker recognition systems when the retrieval phase takes place under adverse conditions. Following a brief review, a speaker recognition system was implemented using the Microsoft MSR Identity Toolbox. Validation tests were carried out with clean speech and with speech contaminated by noise and/or reverberation of varying degrees. The image source method was adopted to account for realistic acoustic conditions in the spaces considered. Statistical relationships between recognition accuracy and signal-to-noise ratios or reverberation times were then established. Results show that noise and reverberation can, to different extents, degrade recognition performance, and that both reverberation time and direct-to-reverberant ratio affect recognition accuracy. The findings may be used to estimate the accuracy of speaker recognition and further determine the likelihood that a particular speaker …
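    The test conditions described above (clean speech degraded by additive noise at controlled SNRs and by room reverberation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exponentially decaying synthetic impulse response is a simplified stand-in for the image source method, and the SNR, T60 value, and placeholder signals are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def add_noise_at_snr(clean, noise, snr_db):
    """Scale `noise` so the mixture reaches the requested SNR in dB."""
    noise = np.resize(noise, clean.shape)            # loop/trim noise to the utterance length
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10.0)))
    return clean + gain * noise

def synthetic_rir(t60, fs, length_s=1.0, seed=0):
    """Exponentially decaying noise RIR with reverberation time `t60` (seconds)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(length_s * fs)) / fs
    decay = np.exp(-6.91 * t / t60)                  # amplitude falls by 60 dB after t60 seconds
    return rng.standard_normal(t.size) * decay

def contaminate(clean, noise, fs, snr_db=10.0, t60=0.6):
    """Apply reverberation, then additive noise, to emulate an adverse test condition."""
    reverberant = fftconvolve(clean, synthetic_rir(t60, fs))[: len(clean)]
    return add_noise_at_snr(reverberant, noise, snr_db)

if __name__ == "__main__":
    fs = 16000
    clean = np.random.default_rng(1).standard_normal(fs * 3)   # placeholder for a clean utterance
    noise = np.random.default_rng(2).standard_normal(fs * 3)   # placeholder for a noise recording
    degraded = contaminate(clean, noise, fs, snr_db=5.0, t60=0.8)
    print(degraded.shape)
```

    Sweeping `snr_db` and `t60` over a grid of values and re-scoring the recognizer on each degraded set is how the statistical relationships between accuracy and SNR or reverberation time would be built up.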

    Enhanced Forensic Speaker Verification Using a Combination of DWT and MFCC Feature Warping in the Presence of Noise and Reverberation Conditions

    © 2013 IEEE. Environmental noise and reverberation severely degrade the performance of forensic speaker verification, so robust feature extraction plays an important role in improving it. This paper investigates the effectiveness of combining two feature sets, mel frequency cepstral coefficients (MFCCs) and MFCCs extracted from the discrete wavelet transform (DWT) of the speech, with and without feature warping, for improving modern identity-vector (i-vector) based speaker verification in the presence of noise and reverberation. The performance of i-vector speaker verification was evaluated with several feature extraction techniques: MFCC, feature-warped MFCC, DWT-MFCC, feature-warped DWT-MFCC, a fusion of DWT-MFCC and MFCC features, and a fusion of feature-warped DWT-MFCC and feature-warped MFCC features. Performance was evaluated on the Australian Forensic Voice Comparison and QUT-NOISE databases under noise, reverberation, and combined noisy and reverberant conditions. The results indicate that the fusion of feature-warped DWT-MFCC and feature-warped MFCC is superior to the other feature extraction techniques under the majority of signal-to-noise ratios (SNRs) in the presence of environmental noise, as well as under reverberation and combined noisy and reverberant conditions. At 0 dB SNR, this fusion approach achieves reductions in average equal error rate of 21.33%, 20.00%, and 13.28% over feature-warped MFCC in the presence of environmental noise only, reverberation only, and combined noisy and reverberant environments, respectively. The approach can be used to improve the performance of forensic speaker verification and may be utilized in preparing legal evidence in court.
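    A sketch of the fused front end described above: MFCCs from the raw signal and MFCCs from a DWT sub-band of the signal, each feature-warped over a sliding window and then stacked frame by frame. librosa and PyWavelets are assumed to be available; the wavelet ('db4'), 13 cepstral coefficients, and 301-frame warping window are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
import librosa
import pywt
from scipy.stats import norm

def feature_warp(features, win=301):
    """Rank-based feature warping: map each coefficient track to a standard
    normal distribution over a sliding window. `features` is (n_coeffs, n_frames)."""
    n_coeffs, n_frames = features.shape
    half = win // 2
    warped = np.empty_like(features, dtype=float)
    for t in range(n_frames):
        lo, hi = max(0, t - half), min(n_frames, t + half + 1)
        window = features[:, lo:hi]
        # rank of the current frame within its window, converted to a normal score
        rank = (window < features[:, t:t + 1]).sum(axis=1) + 0.5
        warped[:, t] = norm.ppf(rank / window.shape[1])
    return warped

def dwt_mfcc_fusion(y, sr, n_mfcc=13, wavelet="db4"):
    """Feature-warped MFCC + feature-warped DWT-MFCC, concatenated per frame."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    approx, _ = pywt.dwt(y, wavelet)                      # low-frequency DWT sub-band
    dwt_mfcc = librosa.feature.mfcc(y=approx, sr=sr // 2, n_mfcc=n_mfcc)
    n = min(mfcc.shape[1], dwt_mfcc.shape[1])             # align frame counts before stacking
    return np.vstack([feature_warp(mfcc[:, :n]), feature_warp(dwt_mfcc[:, :n])])

if __name__ == "__main__":
    sr = 16000
    y = np.random.default_rng(0).standard_normal(sr * 2).astype(np.float32)  # stand-in utterance
    feats = dwt_mfcc_fusion(y, sr)
    print(feats.shape)   # (2 * n_mfcc, n_frames)
```

    In an i-vector pipeline these fused, warped frames would simply replace the plain MFCC frames fed to the universal background model and total-variability training.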