10 research outputs found

    Audio-Visual Biometrics and Forgery


    Optimization of data-driven filterbank for automatic speaker verification

    Most speech processing applications use triangular filters spaced on the mel scale for feature extraction. In this paper, we propose a new data-driven filter design method that optimizes filter parameters from given speech data. First, we introduce a frame-selection-based approach for developing a speech-signal-based frequency warping scale. Then, we propose a new method for computing the filter frequency responses using principal component analysis (PCA). The main advantage of the proposed method over recently introduced deep-learning-based methods is that it requires only a very limited amount of unlabeled speech data. We demonstrate that the proposed filterbank has more speaker-discriminative power than the commonly used mel filterbank as well as an existing data-driven filterbank. We conduct automatic speaker verification (ASV) experiments on different corpora using various classifier back-ends. We show that the acoustic features created with the proposed filterbank outperform existing mel-frequency cepstral coefficients (MFCCs) and speech-signal-based frequency cepstral coefficients (SFCCs) in most cases. In experiments with VoxCeleb1 and the popular i-vector back-end, we observe a 9.75% relative improvement in equal error rate (EER) over MFCCs. Similarly, the relative improvement is 4.43% with the recently introduced x-vector system. We obtain further improvement by fusing the proposed method with the standard MFCC-based approach. (Published in Digital Signal Processing, Elsevier.)
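    For readers who want the flavor of the PCA step, the following is a minimal sketch of learning filter frequency responses from unlabeled spectra, with hypothetical function names; it omits the paper's frame-selection procedure and learned warping scale, so it illustrates the general idea rather than the authors' exact algorithm.

```python
# Illustrative only: derive a filterbank from data with PCA, then compute
# log filterbank energies. Not the paper's exact method.
import numpy as np
from sklearn.decomposition import PCA

def pca_filterbank(power_spectra: np.ndarray, n_filters: int = 20) -> np.ndarray:
    """power_spectra: (n_frames, n_bins) short-time power spectra taken
    from a small amount of unlabeled speech."""
    log_spec = np.log(power_spectra + 1e-10)         # compress dynamic range
    pca = PCA(n_components=n_filters).fit(log_spec)
    filters = np.abs(pca.components_)                # (n_filters, n_bins), non-negative gains
    return filters / filters.sum(axis=1, keepdims=True)

def filterbank_features(power_spectra: np.ndarray, filters: np.ndarray) -> np.ndarray:
    # Log energies per learned filter; a DCT over these would yield cepstra.
    return np.log(power_spectra @ filters.T + 1e-10)
```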

    Voice conversion versus speaker verification: an overview

    A speaker verification system automatically accepts or rejects the claimed identity of a speaker based on a speech sample. Recently, major progress has been made in speaker verification, leading to mass-market adoption, for example in smartphones and in online commerce for user authentication. A major concern when deploying speaker verification technology is whether a system is robust against spoofing attacks. Speaker verification studies have provided good insight into speaker characterization, which has contributed to the progress of voice conversion technology. Unfortunately, voice conversion has become one of the most easily accessible techniques for carrying out spoofing attacks and therefore presents a threat to speaker verification systems. In this paper, we briefly introduce the fundamentals of voice conversion and speaker verification technologies. We then give an overview of recent spoofing-attack studies under different conditions, with a focus on voice conversion spoofing attacks. We also discuss anti-spoofing countermeasures for speaker verification.
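    As background for the spoofing discussion, here is a minimal sketch of the likelihood-ratio decision a classical GMM-based verification system makes; the two pre-trained models and the zero threshold are assumptions for illustration, and this is exactly the decision a successful voice conversion attack manipulates.

```python
# Hedged sketch of the classical GMM-UBM accept/reject rule.
from sklearn.mixture import GaussianMixture

def verify(features, gmm_target: GaussianMixture, gmm_ubm: GaussianMixture,
           threshold: float = 0.0) -> bool:
    """features: (n_frames, n_dims) acoustic features of the test utterance.
    Accept the claimed identity when the average log-likelihood ratio
    between the target model and the universal background model is high."""
    llr = gmm_target.score(features) - gmm_ubm.score(features)
    return llr > threshold  # threshold is a placeholder, tuned in practice
```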

    Discriminative and generative approaches for long- and short-term speaker characteristics modeling: application to speaker verification

    The speaker verification problem can be stated as follows: given two speech recordings, determine whether or not they have been uttered by the same speaker. Most current speaker verification systems are based on Gaussian mixture models. This probabilistic representation makes it possible to adequately model the complex distribution of the underlying speech feature parameters; it is, however, an inadequate basis for discriminating between speakers, which is the key issue in speaker verification. In the first part of this thesis, we attempt to overcome these difficulties by combining support vector machines, a well-established discriminative modeling technique, with two generative approaches based on Gaussian mixture models. In the first generative approach, a target speaker is represented by a Gaussian mixture model obtained by Maximum A Posteriori adaptation of a large Gaussian mixture model, known as the universal background model, to the target speaker's data. The second generative approach is Joint Factor Analysis, which has become the state of the art in speaker verification during the last three years. The advantage of this technique is that it provides a framework of powerful tools for modeling the inter-speaker and channel variabilities. We propose and test several kernel functions that are integrated in the design of both previous combinations. The best results are obtained when the support vector machines are applied within a new space called the "total variability space", defined using factor analysis. In this novel modeling approach, the channel effect is treated through a combination of linear discriminant analysis and kernel normalization based on the inverse of the within-speaker covariance matrix. In the second part of this thesis, we present a new approach to modeling the speaker's long-term prosodic and spectral characteristics. This novel approach is based on continuous approximations of the prosodic and cepstral contours contained in a pseudo-syllabic segment of speech. Each of these contours is fitted to a Legendre polynomial, whose coefficients are modeled by a Gaussian mixture model. Joint factor analysis is used to treat the speaker and channel variabilities. Finally, we perform score fusion between the systems based on long-term speaker characteristics and the short-term systems described above.
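    The first generative approach, MAP adaptation of a universal background model toward a target speaker's data, can be sketched as follows. This is a simplified means-only, single-pass relevance-MAP update built on scikit-learn, offered as an illustration rather than the thesis implementation; the relevance factor of 16 is a conventional choice, not a value taken from the text.

```python
# Simplified means-only relevance-MAP adaptation of a UBM (illustrative).
import numpy as np
from sklearn.mixture import GaussianMixture

def map_adapt_means(ubm: GaussianMixture, X: np.ndarray,
                    relevance: float = 16.0) -> GaussianMixture:
    resp = ubm.predict_proba(X)                    # (n_frames, K) responsibilities
    n_k = resp.sum(axis=0)                         # soft frame counts per component
    ex_k = (resp.T @ X) / np.maximum(n_k, 1e-10)[:, None]  # per-component data means
    alpha = (n_k / (n_k + relevance))[:, None]     # data-dependent adaptation weights
    adapted = GaussianMixture(n_components=ubm.n_components,
                              covariance_type=ubm.covariance_type)
    # Keep the UBM's weights and covariances; move only the means toward the data.
    adapted.weights_ = ubm.weights_
    adapted.covariances_ = ubm.covariances_
    adapted.precisions_cholesky_ = ubm.precisions_cholesky_
    adapted.means_ = alpha * ex_k + (1.0 - alpha) * ubm.means_
    return adapted
```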

    Anti-Spoofing for Text-Independent Speaker Verification: An Initial Database, Comparison of Countermeasures, and Human Performance

    In this paper, we present a systematic study of the vulnerability of automatic speaker verification to a diverse range of spoofing attacks. We start with a thorough analysis of the spoofing effects of five speech synthesis and eight voice conversion systems, and of the vulnerability of three speaker verification systems under those attacks. We then introduce a number of countermeasures to prevent spoofing attacks from both known and unknown attackers. Known attackers are spoofing systems whose output was used to train the countermeasures, while an unknown attacker is a spoofing system whose output was not available to the countermeasures during training. Finally, we benchmark automatic systems against human performance on both speaker verification and spoofing detection tasks.
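    To make the known/unknown distinction concrete, here is a hedged sketch of a generic likelihood-ratio countermeasure: the spoof model sees only known attackers at training time, so any system held out of training acts as an unknown attacker at test time. The feature types and model sizes are placeholders, not the paper's systems.

```python
# Generic two-class GMM countermeasure (illustrative, not the paper's models).
import numpy as np
from sklearn.mixture import GaussianMixture

def train_countermeasure(genuine_feats, known_attack_feats, n_comp=64):
    """known_attack_feats: list of feature arrays, one per *known* attacker;
    any system absent from this list is an unknown attacker at test time."""
    gmm_human = GaussianMixture(n_comp, covariance_type="diag").fit(genuine_feats)
    gmm_spoof = GaussianMixture(n_comp, covariance_type="diag").fit(
        np.vstack(known_attack_feats))
    return gmm_human, gmm_spoof

def spoof_score(feats, gmm_human, gmm_spoof):
    # Positive: more human-like; negative: more spoof-like.
    return gmm_human.score(feats) - gmm_spoof.score(feats)
```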

    Automatic speaker recognition: modelling, feature extraction and effects of clinical environment

    Speaker recognition is the task of establishing the identity of an individual based on his or her voice. It has significant potential as a convenient biometric method for telephony applications and does not require sophisticated or dedicated hardware. Speaker recognition is typically achieved in two signal-processing stages: training and testing. The training stage calculates speaker-specific feature parameters from speech; the features are used to generate statistical models of different speakers. In the testing stage, speech samples from unknown speakers are compared with the models and classified. Current state-of-the-art speaker recognition systems use the Gaussian mixture model (GMM) technique in combination with the Expectation-Maximization (EM) algorithm to build the speaker models. The most frequently used features are the mel-frequency cepstral coefficients (MFCCs). This thesis investigated possible improvements in the field of speaker recognition. The identified drawbacks of current speaker recognition systems included slow convergence rates of the modelling techniques and the features' sensitivity to changes due to speaker aging, use of alcohol and drugs, changing health conditions, and mental state. The thesis proposed a new method of deriving the Gaussian mixture model (GMM) parameters, called the EM-ITVQ algorithm. The EM-ITVQ showed a significant improvement in equal error rates and higher convergence rates when compared to the classical GMM based on the expectation-maximization (EM) method. It was demonstrated that features based on a nonlinear model of speech production (TEO-based features) provided better performance compared to the conventional MFCC features. For the first time, the effect of clinical depression on speaker verification rates was tested. It was demonstrated that speaker verification results deteriorate if the speakers are clinically depressed. The deterioration was demonstrated using conventional (MFCC) features. The thesis also showed that replacing the MFCC features with features based on the nonlinear model of speech production (TEO-based features) can reduce the detrimental effect of clinical depression on speaker verification rates.
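    For reference, the baseline MFCC features discussed above take only a few lines to extract. This sketch uses librosa with a hypothetical input file; 13 coefficients is a common default rather than the thesis' configuration.

```python
# MFCC extraction sketch (librosa; file name and settings are placeholders).
import librosa

y, sr = librosa.load("speech.wav", sr=None)          # keep the native sample rate
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, n_frames)
features = mfcc.T                                    # frames as rows, e.g. for GMM training
```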

    Use of the harmonic phase in synthetic speech detection.

    Speaker verification (SV) systems must confront the possibility of being attacked by spoofing techniques. Today, voice conversion and speaker-adaptive speech synthesis technologies have advanced enough to create voices capable of deceiving an SV system. This thesis proposes a synthetic speech detection (SSD) module that can be used as a complement to an SV system but is also able to operate independently. It consists of a GMM-based classifier equipped with models of human and synthetic speech. Each input is compared against both models, and if the likelihood difference exceeds a given threshold, the input is accepted as human; otherwise it is rejected. The developed system is speaker-independent. RPS parameters are used to generate the models. A technique is proposed to reduce the complexity of the training process, avoiding the need to build adapted TTS systems or a voice converter for each speaker. Since most modern adaptation or synthesis systems make use of vocoders, human signals are transcoded with vocoders to obtain synthetic versions of them, from which the synthetic models of the classifier are generated. It is shown that synthetic signals can be detected by detecting that they were created with a vocoder. The performance of the system is tested under different conditions: with the transcoded signals themselves or with TTS attacks. Finally, strategies for training models for SSD systems are presented.
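    The transcoding idea amounts to an analysis-synthesis pass through a vocoder, so that resynthesized human speech stands in for synthetic speech when training the classifier's synthetic models. The sketch below uses the WORLD vocoder via pyworld as one possible choice; the thesis does not prescribe a specific vocoder, and the file names are placeholders.

```python
# Vocoder "transcoding" sketch: analysis followed by resynthesis (illustrative).
import numpy as np
import soundfile as sf
import pyworld

x, fs = sf.read("human.wav")                 # hypothetical human recording
x = x.astype(np.float64)                     # WORLD expects float64 samples
f0, sp, ap = pyworld.wav2world(x, fs)        # F0, spectral envelope, aperiodicity
x_voc = pyworld.synthesize(f0, sp, ap, fs)   # vocoded "synthetic" version
sf.write("human_vocoded.wav", x_voc, fs)     # training material for the synthetic class
```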

    Automatic speech recognition of Cantonese-English code-mixing utterances.

    Chan Yeuk Chi Joyce. Thesis (M.Phil.), Chinese University of Hong Kong, 2005. Includes bibliographical references. Abstracts in English and Chinese.

    Contents:
    Chapter 1  Introduction
        1.1  Background
        1.2  Previous Work on Code-switching Speech Recognition
            1.2.1  Keyword Spotting Approach
            1.2.2  Translation Approach
            1.2.3  Language Boundary Detection
        1.3  Motivations of Our Work
        1.4  Methodology
        1.5  Thesis Outline
        1.6  References
    Chapter 2  Fundamentals of Large Vocabulary Continuous Speech Recognition for Cantonese and English
        2.1  Basic Theory of Speech Recognition
            2.1.1  Feature Extraction
            2.1.2  Maximum a Posteriori (MAP) Probability
            2.1.3  Hidden Markov Model (HMM)
            2.1.4  Statistical Language Modeling
            2.1.5  Search Algorithm
        2.2  Word Posterior Probability (WPP)
        2.3  Generalized Word Posterior Probability (GWPP)
        2.4  Characteristics of Cantonese
            2.4.1  Cantonese Phonology
            2.4.2  Variation and Change in Pronunciation
            2.4.3  Syllables and Characters in Cantonese
            2.4.4  Spoken Cantonese vs. Written Chinese
        2.5  Characteristics of English
            2.5.1  English Phonology
            2.5.2  English with Cantonese Accents
        2.6  References
    Chapter 3  Code-mixing and Code-switching Speech Recognition
        3.1  Introduction
        3.2  Definition
            3.2.1  Monolingual Speech Recognition
            3.2.2  Multilingual Speech Recognition
            3.2.3  Code-mixing and Code-switching
        3.3  Conversation in Hong Kong
            3.3.1  Language Choice of Hong Kong People
            3.3.2  Reasons for Code-mixing in Hong Kong
            3.3.3  How Does Code-mixing Occur?
        3.4  Difficulties for Code-mixing, Specific to Cantonese-English
            3.4.1  Phonetic Differences
            3.4.2  Phonology Differences
            3.4.3  Accent and Borrowing
            3.4.4  Lexicon and Grammar
            3.4.5  Lack of an Appropriate Speech Corpus
        3.5  References
    Chapter 4  Data Collection
        4.1  Data Collection
            4.1.1  Corpus Design
            4.1.2  Recording Setup
            4.1.3  Post-processing of Speech Data
        4.2  A Baseline Database
            4.2.1  Monolingual Spoken Cantonese Speech Data (CUMIX)
        4.3  References
    Chapter 5  System Design and Experimental Setup
        5.1  Overview of the Code-mixing Speech Recognizer
            5.1.1  Bilingual Syllable/Word-based Speech Recognizer
            5.1.2  Language Boundary Detection
            5.1.3  Generalized Word Posterior Probability (GWPP)
        5.2  Acoustic Modeling
            5.2.1  Speech Corpus for Training of Acoustic Models
            5.2.2  Feature Extraction
            5.2.3  Variability in the Speech Signal
            5.2.4  Language Dependency of the Acoustic Models
            5.2.5  Pronunciation Dictionary
            5.2.6  The Training Process of Acoustic Models
            5.2.7  Decoding and Evaluation
        5.3  Language Modeling
            5.3.1  N-gram Language Model
            5.3.2  Difficulties in Data Collection
            5.3.3  Text Data for Training the Language Model
            5.3.4  Training Tools
            5.3.5  Training Procedure
            5.3.6  Evaluation of the Language Models
        5.4  Language Boundary Detection
            5.4.1  Phone-based LBD
            5.4.2  Syllable-based LBD
            5.4.3  LBD Based on Syllable Lattice
        5.5  Integration of the Acoustic Model Scores, Language Model Scores and Language Boundary Information
            5.5.1  Integration of Acoustic Model Scores and Language Boundary Information
            5.5.2  Integration of Modified Acoustic Model Scores and Language Model Scores
            5.5.3  Evaluation Criterion
        5.6  References
    Chapter 6  Results and Analysis
        6.1  Speech Data for Development and Evaluation
            6.1.1  Development Data
            6.1.2  Testing Data
        6.2  Performance of Different Acoustic Units
            6.2.1  Analysis of Results
        6.3  Language Boundary Detection
            6.3.1  Phone-based Language Boundary Detection
            6.3.2  Syllable-based Language Boundary Detection (SYL LBD)
            6.3.3  Language Boundary Detection Based on Syllable Lattice (BILINGUAL LBD)
            6.3.4  Observations
        6.4  Evaluation of the Language Models
            6.4.1  Character Perplexity
            6.4.2  Phonetic-to-text Conversion Rate
            6.4.3  Observations
        6.5  Character Error Rate
            6.5.1  Without Language Boundary Information
            6.5.2  With Language Boundary Detector SYL LBD
            6.5.3  With Language Boundary Detector BILINGUAL-LBD
            6.5.4  Observations
        6.6  References
    Chapter 7  Conclusions and Suggestions for Future Work
        7.1  Conclusion
            7.1.1  Difficulties and Solutions
        7.2  Suggestions for Future Work
            7.2.1  Acoustic Modeling
            7.2.2  Pronunciation Modeling
            7.2.3  Language Modeling
            7.2.4  Speech Data
            7.2.5  Language Boundary Detection
        7.3  References
    Appendix A  Code-mixing Utterances in Training Set of CUMIX
    Appendix B  Code-mixing Utterances in Testing Set of CUMIX
    Appendix C  Usage of Speech Data in CUMIX