
    A Novel Windowing Technique for Efficient Computation of MFCC for Speaker Recognition

    In this paper, we propose a novel family of windowing techniques to compute Mel Frequency Cepstral Coefficients (MFCC) for automatic speaker recognition from speech. The proposed method is based on a fundamental property of the discrete-time Fourier transform (DTFT) related to differentiation in the frequency domain. A classical windowing scheme such as the Hamming window is modified to obtain derivatives of the discrete-time Fourier transform coefficients. It is shown mathematically that the slope and phase of the power spectrum are inherently incorporated into the newly computed cepstrum. Speaker recognition systems based on the proposed family of window functions attain substantial and consistent performance improvements over both the baseline single-taper Hamming window and a recently proposed multitaper windowing technique.
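
    The derivative-in-frequency idea has a compact illustration. The sketch below is a hedged approximation, not the paper's exact window family: it uses the DTFT property that multiplying a sequence by n in time corresponds (up to a factor of j) to differentiating its transform in frequency, so MFCCs computed with a time-ramped Hamming window carry spectral-derivative information alongside the standard coefficients. All parameter values and helper names are illustrative.

```python
import numpy as np
from scipy.fft import dct

def mel_filterbank(n_filters, n_fft, sr):
    # standard triangular mel filterbank (helper, not from the paper)
    hz_to_mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = mel_to_hz(np.linspace(0, hz_to_mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def mfcc_with_window(frame, window, fb, n_ceps=13):
    power = np.abs(np.fft.rfft(frame * window)) ** 2
    return dct(np.log(fb @ power + 1e-10), norm='ortho')[:n_ceps]

sr, n = 16000, 400
frame = np.random.randn(n)                 # stand-in for one speech frame
fb = mel_filterbank(26, n, sr)
hamming = np.hamming(n)
# n * w[n]: windowing with a time-ramped taper yields the frequency derivative
# of the Hamming-windowed spectrum (the DTFT differentiation property)
ramped = np.arange(n) * hamming
features = np.concatenate([mfcc_with_window(frame, hamming, fb),
                           mfcc_with_window(frame, ramped, fb)])
```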

    Analysis of identification features in speech data using a GMM-UBM speaker verification system

    This paper is devoted to feature selection and evaluation in an automatic text-independent speaker verification task. To solve this problem, a speaker verification system based on the Gaussian mixture model and the universal background model (GMM-UBM system) was used. The application sphere and challenges of modern automatic speaker identification systems are considered, and an overview of modern speaker recognition methods and the main speech features used in speaker identification is provided. The feature extraction process used in this article is examined. The reviewed speech features used for speaker verification include mel-frequency cepstral coefficients (MFCC), line spectral pairs (LSP), perceptual linear prediction cepstral coefficients (PLP), short-term energy, formant frequencies, fundamental frequency, voicing probability, zero crossing rate (ZCR), jitter, and shimmer. An experimental evaluation of the GMM-UBM system using different speech features was conducted on a 50-speaker corpus. Feature selection was done using a genetic algorithm and a greedy adding-and-deleting algorithm. An equal error rate (EER) of 0.579% was obtained when using a 256-component Gaussian mixture model and the resulting 28-feature vector. Compared to the standard vector of 14 mel-frequency cepstral coefficients, a 42.1% EER improvement was achieved.
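
    As background for the verification pipeline the abstract describes, here is a minimal GMM-UBM scoring sketch on synthetic data. It is an approximation: the paper MAP-adapts a 256-component UBM and uses a selected 28-feature vector, whereas this sketch simply re-estimates a small speaker GMM initialized at the UBM and scores trials with an average log-likelihood ratio.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
ubm_feats = rng.normal(size=(5000, 14))             # stand-in pooled background features
spk_feats = rng.normal(0.3, 1.0, size=(500, 14))    # stand-in enrollment features

# universal background model (256 components in the paper; 16 here for speed)
ubm = GaussianMixture(16, covariance_type='diag', random_state=0).fit(ubm_feats)

# crude speaker model: EM re-estimation started from the UBM parameters
# (a stand-in for the MAP adaptation used in real GMM-UBM systems)
spk = GaussianMixture(16, covariance_type='diag', random_state=0,
                      weights_init=ubm.weights_, means_init=ubm.means_,
                      precisions_init=1.0 / ubm.covariances_).fit(spk_feats)

def llr(trial):
    # average per-frame log-likelihood ratio; accept when above a threshold
    return spk.score(trial) - ubm.score(trial)

print(llr(rng.normal(0.3, 1.0, size=(200, 14))))    # target-like trial: higher score
print(llr(rng.normal(size=(200, 14))))              # impostor-like trial: lower score
```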

    Automatic text-independent speaker verification using convolutional deep belief network

    This paper is devoted to the use of the convolutional deep belief network as a speech feature extractor for automatic text-independent speaker verification. The paper describes the scope and problems of automatic speaker verification systems. Types of modern speaker verification systems and the types of speech features used in them are considered. The structure and learning algorithm of convolutional deep belief networks are described. The use of speech features extracted from three layers of a trained convolutional deep belief network is proposed. This approach is based on applying image analysis methods both to already extracted speech signal features and to extracting them from the layers of the neural network. Experimental studies of the proposed features were performed on two speech corpora: the authors' own corpus, including audio recordings of 50 speakers, and the TIMIT corpus, including audio recordings of 630 speakers. The accuracy of the proposed features was assessed using different types of classifiers. Direct use of these features did not increase accuracy compared to traditional spectral speech features, such as mel-frequency cepstral coefficients. However, using these features in a classifier ensemble made it possible to reduce the equal error rate to 0.21% on the 50-speaker corpus and to 0.23% on the TIMIT corpus. The results were obtained as part of the state assignment of the Ministry of Education and Science of Russia, project 8.9628.2017/8.9.
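
    The score-fusion step at the end of the abstract is easy to sketch. The block below only illustrates combining per-layer classifiers into an ensemble; the actual CDBN training and layer-wise feature extraction are out of scope, so random arrays stand in for the features taken from the three network layers.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# stand-ins for features pooled from three CDBN layers
layers = [rng.normal(size=(200, d)) for d in (64, 128, 256)]
y = rng.integers(0, 2, size=200)        # 1 = target speaker, 0 = impostor

# one classifier per layer; the ensemble averages their calibrated scores
clfs = [SVC(probability=True, random_state=0).fit(X, y) for X in layers]
scores = np.mean([c.predict_proba(X)[:, 1] for c, X in zip(clfs, layers)], axis=0)
decisions = scores > 0.5                # verification decision per trial
```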

    Improving the performance of MFCC for Persian robust speech recognition

    Mel-frequency cepstral coefficients are the most widely used features in speech recognition, but they are very sensitive to noise. In this paper, to achieve satisfactory performance in Automatic Speech Recognition (ASR) applications, we introduce a new noise-robust set of MFCC features estimated through the following steps. First, spectral mean normalization is applied as a pre-processing step to the noisy original speech signal. The pre-emphasized speech is segmented into overlapping time frames and windowed with a modified Hamming window. Higher-order autocorrelation coefficients are then extracted, and the lower-order autocorrelation coefficients are eliminated. The result is passed through an FFT block and the power spectrum of the output is calculated. A Gaussian-shaped filter bank is applied to the result, followed by a logarithm and two compensator blocks, one performing mean subtraction and the other a root operation; a DCT transformation is the last step. We use an MLP neural network to evaluate the performance of the proposed MFCC method and to classify the results. Speech recognition experiments on various tasks indicate that the proposed algorithm is more robust than traditional ones in noisy conditions.
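
    The processing chain listed above maps fairly directly onto code. The sketch below follows the stated steps (pre-emphasis, framing, windowing, autocorrelation with the lower-order lags removed, FFT power spectrum, Gaussian filter bank, logarithm, mean subtraction, DCT). The spectral mean normalization pre-processing, the exact modified Hamming window, and the placement of the root compensator are not specified in the abstract, so they are approximated or omitted here.

```python
import numpy as np
from scipy.fft import dct

def gaussian_fb(n_filt, n_fft, sr, width=2.0):
    # Gaussian-shaped filters on a mel-spaced grid (exact shape is an assumption)
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    imel = lambda m: 700 * (10 ** (m / 2595) - 1)
    centres = imel(np.linspace(0, mel(sr / 2), n_filt + 2))[1:-1]
    freqs = np.linspace(0, sr / 2, n_fft // 2 + 1)
    sigma = (sr / 2) / (n_filt * width)
    return np.exp(-0.5 * ((freqs[None, :] - centres[:, None]) / sigma) ** 2)

def robust_mfcc(x, sr=16000, frame=400, hop=160, drop=3, n_filt=24, n_ceps=13):
    x = np.append(x[0], x[1:] - 0.97 * x[:-1])          # pre-emphasis
    win = np.hamming(frame)                             # stand-in for the modified Hamming window
    fb = gaussian_fb(n_filt, frame, sr)
    feats = []
    for i in range(1 + (len(x) - frame) // hop):
        seg = x[i * hop:i * hop + frame] * win
        ac = np.correlate(seg, seg, mode='full')[frame - 1:]  # autocorrelation, lags 0..frame-1
        ac[:drop] = 0.0                                 # eliminate the lower-order coefficients
        pspec = np.abs(np.fft.rfft(ac, n=frame)) ** 2   # power spectrum via FFT
        feats.append(fb @ pspec)
    logf = np.log(np.asarray(feats) + 1e-10)            # logarithm
    logf -= logf.mean(axis=0)                           # mean-subtraction compensator
    return dct(logf, norm='ortho', axis=1)[:, :n_ceps]  # DCT (root block omitted, see above)

feats = robust_mfcc(np.random.randn(16000))             # 1 s of stand-in noisy speech
```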

    Voice Disorder Classification Based on Multitaper Mel Frequency Cepstral Coefficients Features

    Mel Frequency Cepstral Coefficients (MFCCs) are widely used to extract essential information from a voice signal and have become a popular feature extractor in audio processing. However, MFCC features are usually calculated from a single window (taper), which is characterized by large variance. This study investigates reducing this variance for the classification of two different voice qualities (normal voice and disordered voice) using multitaper MFCC features. We also compare their performance with newly proposed windowing techniques and the conventional single-taper technique. The results demonstrate that the adapted weighted Thomson multitaper method distinguishes between normal and disordered voice better than the conventional single-taper (Hamming window) technique and the two newly proposed windowing methods. The multitaper MFCC features may be helpful in identifying voices at risk for a real pathology, which has to be confirmed later.
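
    A weighted Thomson multitaper spectrum is straightforward to compute with SciPy's DPSS tapers. The sketch below averages the eigenspectra weighted by each taper's eigenvalue concentration ratio, one common weighting; the adaptive weighting used in the paper may differ, and all parameter values are illustrative.

```python
import numpy as np
from scipy.signal.windows import dpss
from scipy.fft import dct

def mel_fb(n_filt, n_fft, sr):
    # standard triangular mel filterbank (helper, not from the paper)
    m = np.linspace(0, 2595 * np.log10(1 + (sr / 2) / 700), n_filt + 2)
    b = np.floor((n_fft + 1) * 700 * (10 ** (m / 2595) - 1) / sr).astype(int)
    fb = np.zeros((n_filt, n_fft // 2 + 1))
    for i in range(n_filt):
        l, c, r = b[i], b[i + 1], b[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def multitaper_mfcc(frame, fb, nw=3.0, n_tapers=6, n_ceps=13):
    # eigenvalue-weighted Thomson multitaper spectrum, then the usual MFCC steps
    tapers, ratios = dpss(len(frame), nw, n_tapers, return_ratios=True)
    eigenspectra = np.abs(np.fft.rfft(tapers * frame[None, :], axis=1)) ** 2
    spectrum = (ratios[:, None] * eigenspectra).sum(axis=0) / ratios.sum()
    return dct(np.log(fb @ spectrum + 1e-10), norm='ortho')[:n_ceps]

frame = np.random.randn(400)                 # stand-in voice frame
ceps = multitaper_mfcc(frame, mel_fb(24, 400, 16000))
```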

    Modified cyclic shift tree denoising technique with fewer number of sweep for wave V detection

    Nowadays, Newborn Hearing Screening (NHS) has become one of the most important recommendations in modern pediatric audiology in developing countries, owing to the importance of early detection: the first six months of age are the critical period for learning communication. The Auditory Brainstem Response (ABR) is an electrophysiological response in the electroencephalogram generated in the brainstem in response to an acoustic stimulus. The conventional method used previously is accurate, but it is time consuming, especially in the presence of noise interference. The objective of this research is to reduce screening time by implementing enhanced signal processing methods and to reduce the influence of noise interference. This thesis applies the Wavelet Kalman Filter (WKF), Cyclic Shift Tree Denoising (CSTD) and Modified Cyclic Shift Tree Denoising (MCSTD) to overcome these problems. MCSTD is a modification of CSTD that combines the wavelet transform, the Kalman filter and CSTD. The modified approach was compared to plain averaging, WKF and CSTD to identify an effective wavelet denoising method that gives rapid and accurate extraction of ABRs. Results show that MCSTD outperforms the other methods, giving the highest SNR and detecting wave V with the number of sweeps reduced to 512 and 1024 for chirp and click stimuli, respectively.
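
    Wavelet-threshold denoising of the sweep average is one building block shared by the methods above; the sketch below shows it with PyWavelets on synthetic sweeps. The cyclic shifting across the averaging tree and the Kalman filtering that distinguish CSTD and MCSTD are not reproduced here, and the signal model is purely illustrative.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
template = np.exp(-((np.arange(256) - 120) ** 2) / 50.0)   # stand-in wave-V-like peak
sweeps = template + rng.normal(0, 3.0, size=(512, 256))    # 512 noisy ABR sweeps

avg = sweeps.mean(axis=0)                                  # conventional ensemble average

# wavelet soft-threshold denoising of the average
coeffs = pywt.wavedec(avg, 'db4', level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745             # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(len(avg)))                # universal threshold
den = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(den, 'db4')
print(denoised.argmax())                                   # latency of the recovered peak
```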

    Speaker Recognition Using Convolutional Neural Network and Neutrosophic

    Speaker recognition is the process of recognizing persons based on their voice and is widely used in many applications. Although much research has been performed in this domain, some challenges have not been addressed yet. In this research, neutrosophic (NS) theory and convolutional neural networks (CNN) are used to improve the accuracy of speaker recognition systems. To do this, the spectrogram of the speech signal is first created and then transferred to the NS domain. In the next step, the alpha correction operator is applied repeatedly until the entropy remains constant in subsequent iterations. Finally, a convolutional neural network architecture is proposed to classify spectrograms in the NS domain. Two datasets, TIMIT and Aurora2, are used to evaluate the effectiveness of the proposed method. The precision of the proposed method on TIMIT and Aurora2 is 93.79% and 95.24%, respectively, demonstrating that the proposed model outperforms competitive models.
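
    The neutrosophic preprocessing can be sketched along the lines of standard neutrosophic image processing: truth, indeterminacy and falsity channels derived from local statistics, plus an alpha-mean correction iterated until the entropy stabilizes. The abstract does not spell out these formulas, so the definitions below are assumptions drawn from that literature, and a random array stands in for the spectrogram.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def to_ns(img, size=5):
    # truth = normalized local mean; indeterminacy = normalized local deviation
    lm = uniform_filter(img, size)
    T = (lm - lm.min()) / (lm.max() - lm.min() + 1e-10)
    d = np.abs(img - lm)
    I = (d - d.min()) / (d.max() - d.min() + 1e-10)
    return T, I, 1.0 - T                  # falsity taken as the complement of truth

def alpha_mean(T, I, alpha=0.85, size=5):
    # alpha correction: replace T by its local mean where indeterminacy is high
    return np.where(I >= alpha, uniform_filter(T, size), T)

def entropy(x, bins=64):
    p, _ = np.histogram(x, bins=bins, range=(0.0, 1.0))
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

spec = np.abs(np.random.default_rng(3).normal(size=(128, 128)))  # stand-in spectrogram
T, I, F = to_ns(spec / spec.max())
prev = -1.0
for _ in range(100):                      # iterate until the entropy of T stabilizes
    e = entropy(T)
    if abs(e - prev) < 1e-4:
        break
    prev = e
    T = alpha_mean(T, I)
    _, I, _ = to_ns(T)                    # recompute indeterminacy after correction
# T (and optionally I, F) then feed the CNN in place of the raw spectrogram
```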

    Quality Measures for Speaker Verification with Short Utterances

    The performance of automatic speaker verification (ASV) systems degrades as the amount of speech used for enrollment and verification is reduced. Combining multiple systems based on different features and classifiers considerably reduces the speaker verification error rate with short utterances. This work attempts to incorporate supplementary information during the system combination process. We use the quality of the estimated model parameters as supplementary information and introduce a class of novel quality measures formulated using the zero-order sufficient statistics computed during the i-vector extraction process. We have used the proposed quality measures as side information for combining ASV systems based on the Gaussian mixture model-universal background model (GMM-UBM) and i-vectors. The proposed methods demonstrate considerable improvement in speaker recognition performance on NIST SRE corpora, especially in short-duration conditions. We have also observed improvement over existing systems based on different duration-based quality measures. (Comment: Accepted for publication in Digital Signal Processing: A Review Journal.)
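
    Zero-order sufficient statistics are the per-component posterior counts accumulated over frames during i-vector extraction. The paper's exact quality measures are not given in the abstract; the sketch below computes the statistics and, as one plausible (hypothetical) example, the entropy of their normalized version, which can drop for short utterances that cover fewer UBM components.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# stand-in UBM trained on pooled background features
ubm = GaussianMixture(32, covariance_type='diag', random_state=0)
ubm.fit(rng.normal(size=(4000, 20)))

def zero_order_stats(ubm, X):
    # N_c = sum over frames of the posterior of UBM component c,
    # the statistics accumulated during i-vector extraction
    return ubm.predict_proba(X).sum(axis=0)

def quality(ubm, X):
    # hypothetical quality measure: entropy of the normalized zero-order stats
    n = zero_order_stats(ubm, X)
    p = n / n.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

print(quality(ubm, rng.normal(size=(30, 20))))     # short utterance
print(quality(ubm, rng.normal(size=(3000, 20))))   # long utterance
```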