181 research outputs found

    Efficient speaker recognition for mobile devices


    TWO-DIMENSIONAL GMM-BASED CLUSTERING IN THE PRESENCE OF QUANTIZATION NOISE

    In this paper, unlike commonly considered clustering, in which data attributes are represented accurately, we investigate how successfully clustering can be performed when data attributes are represented with lower accuracy, i.e. using a small number of bits. In particular, we analyze the effect of attribute quantization on two-dimensional, two-component Gaussian mixture model (GMM)-based clustering performed with the expectation–maximization (EM) algorithm. The data attributes are quantized independently using uniform quantizers whose support limits are adjusted to the minimal and maximal attribute values. The analysis makes it possible to determine the number of bits for data representation that provides accurate clustering. These findings can be useful for clustering in which, before being grouped, the data must be represented with a small, finite number of bits because they are transmitted over a bandwidth-limited channel.
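    As a rough illustration of the setup described above (not the paper's exact experiment), the following sketch quantizes each attribute of two-dimensional data with a uniform quantizer whose support limits match that attribute's minimum and maximum, fits a two-component GMM with EM at several bit depths, and compares the resulting cluster assignments with those obtained from the unquantized data. All data and parameters are synthetic placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic two-component, two-dimensional mixture (illustrative parameters only)
X = np.vstack([rng.normal([-2.0, 0.0], 1.0, size=(500, 2)),
               rng.normal([+2.0, 0.0], 1.0, size=(500, 2))])

def quantize_uniform(x, bits):
    """Quantize each attribute independently with a uniform quantizer
    whose support limits are the attribute's min and max values."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step          # cell midpoints as reconstruction points

# Reference clustering on the accurately represented data
labels_full = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

for bits in (2, 3, 4, 6):
    labels_q = GaussianMixture(n_components=2, random_state=0).fit_predict(
        quantize_uniform(X, bits))
    # With two clusters, labels may be permuted; take the better alignment
    agree = max((labels_q == labels_full).mean(), (labels_q != labels_full).mean())
    print(f"{bits} bits per attribute: agreement with unquantized clustering = {agree:.3f}")
```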

    Security in Voice Authentication

    We evaluate the security of human voice password databases from an information-theoretic point of view. More specifically, we provide a theoretical estimate of the amount of entropy in human voice when processed using conventional GMM-UBM technology with MFCCs as the acoustic features. The theoretical estimate gives rise to a methodology for analyzing the security level of a corpus of human voice. That is, given a database containing speech signals, we provide a method for estimating the relative entropy (Kullback-Leibler divergence) of the database, thereby establishing the security level of the speaker verification system. To demonstrate this, we analyze the YOHO database, a corpus of voice samples collected from 138 speakers, and show that the amount of entropy extracted is less than 14 bits. We also present a practical attack that succeeds in impersonating the voice of any speaker within the corpus with 98% success probability in as few as 9 trials; the attack still succeeds at a rate of 62.50% if only 4 attempts are permitted. Further, based on the same attack rationale, we mount an attack on the ALIZE speaker verification system. We show through experimentation that the attacker can impersonate any user in a database of 69 people with a success rate of about 25% using only 5 trials, rising above 50% when the allowed authentication attempts are increased to 20. Finally, when the practical attack is cast in terms of an entropy metric, we find that the theoretical entropy estimate almost perfectly predicts the success rate of the practical attack, giving further credence to the theoretical model and the associated entropy estimation technique.
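    The entropy argument above rests on the relative entropy between speaker models and a universal background model (UBM). The sketch below, which is not the authors' estimator, shows one common way to approximate the Kullback-Leibler divergence between two fitted GMMs by Monte Carlo sampling, since no closed form exists; the 13-dimensional "MFCC" vectors and model sizes used here are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def kl_gmm_monte_carlo(speaker_gmm, ubm, n_samples=100_000):
    """Estimate KL(speaker || UBM) = E_speaker[log p_speaker(x) - log p_ubm(x)]
    by sampling from the speaker model."""
    x, _ = speaker_gmm.sample(n_samples)
    return float(np.mean(speaker_gmm.score_samples(x) - ubm.score_samples(x)))

# Illustrative use with synthetic feature vectors standing in for MFCC frames
rng = np.random.default_rng(1)
ubm_feats = rng.normal(size=(5000, 13))
spk_feats = rng.normal(loc=0.3, size=(2000, 13))
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(ubm_feats)
spk = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(spk_feats)

kl_nats = kl_gmm_monte_carlo(spk, ubm)
print(f"Estimated KL(speaker || UBM): {kl_nats:.2f} nats ({kl_nats / np.log(2):.2f} bits)")
```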

    Acoustic Approaches to Gender and Accent Identification

    There has been considerable research on the problems of speaker and language recognition from samples of speech. A less researched problem is that of accent recognition. Although this is a similar problem to language identification, different accents of a language exhibit more fine-grained differences between classes than languages. This presents a tougher problem for traditional classification techniques. In this thesis, we propose and evaluate a number of techniques for gender and accent classification. These techniques are novel modifications and extensions to state-of-the-art algorithms, and they result in enhanced performance on gender and accent recognition. The first part of the thesis focuses on the problem of gender identification, and presents a technique that gives improved performance in situations where training and test conditions are mismatched. The bulk of this thesis is concerned with the application of the i-Vector technique to accent identification, which is the most successful approach to acoustic classification to have emerged in recent years. We show that it is possible to achieve high accuracy accent identification without reliance on transcriptions and without utilising phoneme recognition algorithms. The thesis describes various stages in the development of i-Vector based accent classification that improve the standard approaches usually applied for speaker or language identification, which are insufficient. We demonstrate that very good accent identification performance is possible with acoustic methods by considering different i-Vector projections, frontend parameters, i-Vector configuration parameters, and an optimised fusion of the resulting i-Vector classifiers we can obtain from the same data. We claim to have achieved the best accent identification performance on the test corpus for acoustic methods, with up to 90% identification rate. This performance is even better than previously reported acoustic-phonotactic based systems on the same corpus, and is very close to performance obtained via transcription based accent identification. Finally, we demonstrate that the utilization of our techniques for speech recognition purposes leads to considerably lower word error rates.
    Keywords: Accent Identification, Gender Identification, Speaker Identification, Gaussian Mixture Model, Support Vector Machine, i-Vector, Factor Analysis, Feature Extraction, British English, Prosody, Speech Recognition
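    One ingredient mentioned above is an optimised fusion of several i-Vector classifiers built from the same data. The following sketch, which assumes i-vectors have already been extracted by separate front-ends and uses arbitrary fusion weights, illustrates the general idea of cosine scoring against per-class mean i-vectors followed by weighted score-level fusion; it does not reproduce the thesis' projections or tuning.

```python
import numpy as np

def cosine_scores(test_ivec, class_means):
    """Cosine similarity between one test i-vector and each accent-class mean i-vector."""
    t = test_ivec / np.linalg.norm(test_ivec)
    m = class_means / np.linalg.norm(class_means, axis=1, keepdims=True)
    return m @ t                                      # one score per accent class

def fuse_and_classify(test_ivecs_per_system, class_means_per_system, weights):
    """Weighted score-level fusion of several i-vector subsystems for one utterance."""
    fused = sum(w * cosine_scores(t, m)
                for w, t, m in zip(weights, test_ivecs_per_system, class_means_per_system))
    return int(np.argmax(fused))

# Illustrative use: 3 subsystems, 5 accent classes, 400-dimensional i-vectors (all random)
rng = np.random.default_rng(2)
means = [rng.normal(size=(5, 400)) for _ in range(3)]   # per-subsystem class means
test = [rng.normal(size=400) for _ in range(3)]          # same utterance, three front-ends
print("Predicted accent class:", fuse_and_classify(test, means, weights=[0.5, 0.3, 0.2]))
```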

    An Optimized and Privacy-Preserving System Architecture for Effective Voice Authentication over Wireless Network

    Speaker authentication systems determine the identity of a speaker in audio from distinctive voice characteristics. Accurate speaker authentication over wireless networks is becoming more challenging due to phishing attacks on the network. Many kinds of speech authentication models have been constructed for applications where voice authentication is the primary means of user identity verification; however, existing models have limitations in accuracy and in resisting real-time phishing attacks over wireless networks. In this research, an optimized and privacy-preserving system architecture for effective speaker authentication over a wireless network is proposed, to identify the speaker's voice accurately in real time and to prevent phishing attacks over the network more reliably. The proposed system achieved strong performance: the measured accuracy, precision, recall, and F1 score were 98.91%, 96.43%, 95.37%, and 97.99%, respectively. The measured training losses at epochs 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100 were 2.4, 2.1, 1.8, 1.5, 1.2, 0.9, 0.6, 0.3, 0.3, 0.3, and 0.2, respectively, and the corresponding testing losses were 2.2, 2, 1.5, 1.4, 1.1, 0.8, 0.8, 0.7, 0.4, 0.1, and 0.1. Voice authentication over wireless networks remains a serious issue due to phishing attacks and inaccurate voice identification, so further research is needed to develop less computationally complex speech authentication systems.
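    For reference, the reported accuracy, precision, recall, and F1 score are standard classification metrics; the short sketch below (hypothetical labels, not the authors' evaluation code) shows how they are typically computed for binary accept/reject authentication decisions.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical decisions: 1 = genuine speaker (should be accepted), 0 = impostor
y_true = [1, 1, 1, 0, 0, 1, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 1, 1, 0, 1]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))   # TP / (TP + FP)
print("recall   :", recall_score(y_true, y_pred))       # TP / (TP + FN)
print("F1       :", f1_score(y_true, y_pred))           # harmonic mean of precision and recall
```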

    Sketching for Large-Scale Learning of Mixture Models

    Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. We propose a "compressive learning" framework where we estimate model parameters from a sketch of the training data. This sketch is a collection of generalized moments of the underlying probability distribution of the data. It can be computed in a single pass on the training set, and is easily computable on streams or distributed datasets. The proposed framework shares similarities with compressive sensing, which aims at drastically reducing the dimension of high-dimensional signals while preserving the ability to reconstruct them. To perform the estimation task, we derive an iterative algorithm analogous to sparse reconstruction algorithms in the context of linear inverse problems. We exemplify our framework with the compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics on the choice of the sketching procedure and theoretical guarantees of reconstruction. We experimentally show on synthetic data that the proposed algorithm yields results comparable to the classical Expectation-Maximization (EM) technique while requiring significantly less memory and fewer computations when the number of database elements is large. We further demonstrate the potential of the approach on real large-scale data (over 10^8 training samples) for the task of model-based speaker verification. Finally, we draw some connections between the proposed framework and approximate Hilbert space embedding of probability distributions using random features. We show that the proposed sketching operator can be seen as an innovative method to design translation-invariant kernels adapted to the analysis of GMMs. We also use this theoretical framework to derive information preservation guarantees, in the spirit of infinite-dimensional compressive sensing.
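    The central object in this framework is the sketch itself: a fixed-size vector of generalized moments averaged over the data in a single pass. The snippet below illustrates one plausible instantiation using random Fourier moments with arbitrary dimensions; the paper's actual sketching-operator design and the iterative GMM-recovery algorithm are not reproduced here.

```python
import numpy as np

def compute_sketch(data_stream, frequencies):
    """Average complex exponentials e^{i w.x} over a stream of samples,
    giving a fixed-size sketch whose length is independent of the dataset size."""
    z = np.zeros(frequencies.shape[0], dtype=complex)
    n = 0
    for x in data_stream:                      # single pass; works on streams or chunks
        z += np.exp(1j * (frequencies @ x))
        n += 1
    return z / n

rng = np.random.default_rng(3)
d, m = 10, 200                                 # data dimension and sketch size (illustrative)
omega = rng.normal(size=(m, d))                # random frequency vectors
samples = (rng.normal(size=d) for _ in range(10_000))   # stands in for a large dataset
sketch = compute_sketch(samples, omega)
print("sketch size:", sketch.shape, "- independent of the number of training samples")
```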

    Automatic speaker recognition: modelling, feature extraction and effects of clinical environment

    Speaker recognition is the task of establishing the identity of an individual based on his/her voice. It has significant potential as a convenient biometric method for telephony applications and does not require sophisticated or dedicated hardware. The speaker recognition task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech. The features are used to generate statistical models of different speakers. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Current state-of-the-art speaker recognition systems use the Gaussian mixture model (GMM) technique in combination with the Expectation Maximization (EM) algorithm to build the speaker models. The most frequently used features are the Mel Frequency Cepstral Coefficients (MFCC). This thesis investigated areas of possible improvements in the field of speaker recognition. The identified drawbacks of current speaker recognition systems included: slow convergence rates of the modelling techniques and the features' sensitivity to changes due to aging of speakers, use of alcohol and drugs, changing health conditions and mental state. The thesis proposed a new method of deriving the Gaussian mixture model (GMM) parameters called the EM-ITVQ algorithm. The EM-ITVQ showed a significant improvement of the equal error rates and higher convergence rates when compared to the classical GMM based on the expectation maximization (EM) method. It was demonstrated that features based on the nonlinear model of speech production (TEO based features) provided better performance compared to the conventional MFCC features. For the first time the effect of clinical depression on speaker verification rates was tested. It was demonstrated that the speaker verification results deteriorate if the speakers are clinically depressed. The deterioration process was demonstrated using conventional (MFCC) features. The thesis also showed that when replacing the MFCC features with features based on the nonlinear model of speech production (TEO based features), the detrimental effect of clinical depression on speaker verification rates can be reduced.
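    The two-stage training/testing pipeline described above can be illustrated with a minimal baseline: one GMM per enrolled speaker trained on MFCC frames by EM, and identification by the highest average log-likelihood on the test features. The sketch below uses librosa for MFCC extraction and scikit-learn for EM as stand-ins (the thesis' EM-ITVQ algorithm and TEO-based features are not implemented here); file paths and model sizes are placeholders.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(wav_path, n_mfcc=13):
    """Load a waveform and return its MFCC frames as a (frames, n_mfcc) array."""
    y, sr = librosa.load(wav_path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_speaker_models(enroll_files):
    """Training stage: enroll_files maps speaker id -> list of enrolment wav paths."""
    models = {}
    for spk, paths in enroll_files.items():
        feats = np.vstack([mfcc_frames(p) for p in paths])
        models[spk] = GaussianMixture(n_components=16, covariance_type="diag",
                                      random_state=0).fit(feats)     # EM fit
    return models

def identify(models, test_wav):
    """Testing stage: pick the speaker model with the highest mean log-likelihood."""
    feats = mfcc_frames(test_wav)
    scores = {spk: gmm.score(feats) for spk, gmm in models.items()}
    return max(scores, key=scores.get)

# Hypothetical usage (paths are placeholders):
# models = train_speaker_models({"alice": ["alice_01.wav"], "bob": ["bob_01.wav"]})
# print(identify(models, "unknown.wav"))
```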