
    A Bandpass Transform for Speaker Normalization

    One of the major challenges for Automatic Speech Recognition is to handle speech variability. Inter-speaker variability is partly due to differences in speakers' anatomy, especially in their Vocal Tract geometry. Dissimilarities in Vocal Tract Length (VTL) are a known source of speech variation, and Vocal Tract Length Normalization is a popular Speaker Normalization technique that can be implemented as a transformation of the spectrum's frequency axis. In this document we introduce a new spectral transformation for Speaker Normalization. We use the Bilinear Transformation to derive a new frequency warping resulting from the mapping of a prototype Band-Pass (BP) filter into a general BP filter. This new transformation, called the Bandpass Transformation (BPT), offers two degrees of freedom, enabling complex warpings of the frequency axis that differ from previous work with the Bilinear Transform. We then define a procedure for using BPT in Speaker Normalization, based on the Nelder-Mead algorithm for estimating the BPT parameters. We present a detailed study of the performance of the new approach on two test sets with gender-dependent and gender-independent systems. Our results demonstrate clear improvements over standard methods used in VTL Normalization. A score compensation procedure is also presented; by refining the BPT parameter estimation, it yields further improvements.
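    The two-parameter BPT itself is specific to this paper, but the single-parameter bilinear warp it generalizes is well known. A minimal sketch in Python (the warp factor alpha is illustrative) of that classic all-pass bilinear frequency warping, which BPT replaces with a two-parameter map derived from band-pass filter mappings:

        import numpy as np

        def bilinear_warp(omega, alpha):
            """Classic one-parameter bilinear (all-pass) frequency warping.

            omega is normalized frequency in [0, pi]; alpha > 0 compresses
            the axis towards low frequencies, alpha < 0 expands it.
            """
            return omega + 2.0 * np.arctan(
                alpha * np.sin(omega) / (1.0 - alpha * np.cos(omega)))

        omega = np.linspace(0.0, np.pi, 257)      # normalized frequency grid
        warped = bilinear_warp(omega, alpha=0.1)  # a mild speaker-dependent warp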

    Perceptual spectral matching in glottal-excitation-vocoded statistical parametric speech synthesis using a mel filterbank

    This thesis presents a novel perceptual spectral matching technique for glottal-vocoded statistical parametric speech synthesis. The proposed method uses a perceptual matching criterion based on mel-scale filterbanks. The background section discusses the physiology and modelling of human speech production and perception, as needed for speech synthesis and perceptual spectral matching. The working principles of statistical parametric speech synthesis and the baseline glottal-source-excited vocoder are also described. The proposed method is evaluated against the baseline, first with an objective measure based on the mel-cepstral distance, and second with a subjective listening test. The novel method was found to perform comparably to the baseline spectral matching method of the glottal vocoder.
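    The thesis's exact criterion is not reproduced in the abstract, but a mel-filterbank matching error of the general kind it describes can be sketched as follows (Python; the spectra and filterbank sizes are illustrative placeholders):

        import numpy as np
        import librosa

        n_fft, sr = 1024, 16000
        rng = np.random.default_rng(0)
        target = np.abs(rng.standard_normal(n_fft // 2 + 1))  # stand-in spectra
        synth = np.abs(rng.standard_normal(n_fft // 2 + 1))

        # Mel-scale filterbank: each row is a triangular filter over FFT bins
        mel_fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=40)

        def mel_matching_error(a, b, fb, eps=1e-10):
            # Mean squared error between log mel-band energies of two spectra
            return np.mean((np.log(fb @ a + eps) - np.log(fb @ b + eps)) ** 2)

        err = mel_matching_error(target, synth, mel_fb)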

    Adaptation of children's speech with limited data based on formant-like peak alignment

    Abstract Automatic recognition of children's speech using acoustic models trained on adults results in poor performance due to differences in speech acoustics. These acoustical differences are a consequence of children having shorter vocal tracts and smaller vocal cords than adults. Hence, speaker adaptation needs to be performed. However, in real-world applications the amount of adaptation data available may be less than what common speaker adaptation techniques need to yield reasonable performance. In this paper we first study, in the discrete frequency domain, the relationship between frequency warping in the front-end and the corresponding transformations in the back-end. Three common feature extraction schemes are investigated and the linearity of their corresponding back-end transformations is discussed. In particular, we show that under certain approximations, frequency warping of MFCC features with mel-warped triangular filter banks equals a linear transformation in the cepstral space. Based on that linear transformation, a formant-like peak alignment algorithm is proposed to adapt adult acoustic models to children's speech. The peaks are estimated by Gaussian mixtures using the Expectation-Maximization (EM) algorithm.
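    The paper's own derivation is not reproduced here; the sketch below only illustrates the general idea that a front-end frequency warp can be folded into a single cepstral-domain matrix. The linear warp of mel-band indices is a deliberately crude, hypothetical stand-in for warping the triangular filter bank itself.

        import numpy as np

        def dct_matrix(n_ceps, n_bands):
            # Type-II DCT basis mapping log filterbank energies to cepstra
            k = np.arange(n_ceps)[:, None]
            n = np.arange(n_bands)[None, :]
            return np.cos(np.pi * k * (2 * n + 1) / (2 * n_bands))

        def warp_matrix(n_bands, alpha):
            # Linearly interpolate log band energies at warped band indices
            W = np.zeros((n_bands, n_bands))
            for i, s in enumerate(np.arange(n_bands) * alpha):
                lo = int(np.clip(np.floor(s), 0, n_bands - 1))
                hi = min(lo + 1, n_bands - 1)
                W[i, lo] += 1.0 - (s - lo)
                W[i, hi] += s - lo
            return W

        n_ceps, n_bands, alpha = 13, 26, 0.9
        C = dct_matrix(n_ceps, n_bands)
        A = C @ warp_matrix(n_bands, alpha) @ np.linalg.pinv(C)
        # warped_mfcc = A @ mfcc applies the frequency warp as a linear map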

    Improving Automatic Speech Recognition on Endangered Languages

    As the world moves towards a more globalized scenario, it has brought along with it the extinction of several languages. It has been estimated that over the next century more than half of the world's languages will become extinct, and an alarming 43% of the world's languages are already at some level of endangerment. The survival of many of these languages depends on the pressures faced by their dwindling numbers of speakers, and there is often a strong correlation between a language's endangerment and the number and quality of its recordings and documentation. But why do we care about preserving these less prevalent languages? The behavior of a culture is often expressed in speech through its native language. Memories, ideas, major events, practices, customs and lessons learnt, both of individuals and of the community, are all communicated to the outside world via language, so language preservation is crucial to understanding the behavior of these communities. Deep learning models have been shown to dramatically improve speech recognition accuracy but require large amounts of labelled data. Unfortunately, resource-constrained languages typically fall short of the data needed for successful training. To help alleviate the problem, data augmentation techniques fabricate many new samples from each original sample. The aim of this master's thesis is to examine the effect of different augmentation techniques on speech recognition for resource-constrained languages. The augmentation methods experimented with are noise augmentation, pitch augmentation, speed augmentation, and voice transformation augmentation using Generative Adversarial Networks (GANs). The thesis also examines the effectiveness of GANs in voice transformation and their limitations. The information gained from this study will further guide data collection, specifically in understanding the conditions under which data should be collected so that GANs can effectively perform voice transformation. Training on the original data with the Deep Speech model resulted in a 95.03% WER. Training the Seneca data on a Deep Speech model pretrained on an English dataset reduced the WER to 70.43%. Adding 15 augmented samples per sample reduced the WER to 68.33%, and adding 25 augmented samples per sample reduced it to 48.23%. Experiments to find the best single augmentation method among noise addition, pitch variation, speed variation, and GAN augmentation revealed that GAN augmentation performed best, reducing the WER to 60.03%.
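    The GAN-based voice transformation is model-specific, but the three classical augmentations named above are straightforward to sketch. A minimal version in Python, assuming librosa, a placeholder file name ("sample.wav"), and illustrative noise and warp ranges:

        import numpy as np
        import librosa

        def augment(y, sr, rng):
            """Return noise-, pitch- and speed-augmented copies of a waveform."""
            noisy = y + 0.005 * rng.standard_normal(len(y))  # additive noise
            pitched = librosa.effects.pitch_shift(
                y, sr=sr, n_steps=rng.uniform(-2, 2))        # pitch variation
            sped = librosa.effects.time_stretch(
                y, rate=rng.uniform(0.9, 1.1))               # speed variation
            return noisy, pitched, sped

        y, sr = librosa.load("sample.wav", sr=16000)
        variants = augment(y, sr, np.random.default_rng(0))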

    A Comprehensive Review on Speech Recognition and Its Techniques

    Abstract: This paper provides a review of speech recognition systems and their techniques, and surveys advancements in the field. Speech is a means of communication between a sender and a receiver; a speech recognition system takes a speech signal as input and produces text as output. The paper describes the basic Automatic Speech Recognition (ASR) system and covers various speech recognition techniques, including speech analysis, feature extraction, and matching techniques. It gives a brief description of feature extraction techniques such as Linear Predictive Coding (LPC), Mel-Frequency Cepstral Coefficients (MFCC) and Perceptual Linear Prediction (PLP).
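    Of the feature extraction techniques mentioned, LPC and MFCC can be computed with standard tooling; a brief Python sketch (the file path is a placeholder, and PLP is not included in librosa, so it would need another toolkit):

        import librosa

        y, sr = librosa.load("utterance.wav", sr=16000)
        lpc = librosa.lpc(y, order=12)    # LPC: all-pole vocal tract model
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # 13 MFCCs per frame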

    Hierarchical methods for large population speaker identification using telephone speech

    This study focuses on speaker identification (SID). Several problems, such as acoustic noise, channel noise, speaker variability, and a large population of known speakers within the system, limit good SID performance. The SID system extracts speaker-specific features from the digitised speech signal for accurate identification. These feature sets are clustered to form a speaker template known as a speaker model. As the number of speakers enrolled in the system grows, more models accumulate and inter-speaker confusion results. This study proposes hierarchical methods that aim to split the large population of enrolled speakers into smaller groups of model databases to minimise inter-speaker confusion.
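    The study's specific grouping procedure is not spelled out in this abstract; below is a minimal two-stage sketch of the idea, assuming speaker models are fixed-length vectors and using k-means to form the smaller groups:

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical enrolled population: one model vector per speaker
        speaker_models = np.random.default_rng(0).standard_normal((1000, 64))

        # Stage 1: partition the enrolled speakers into smaller groups
        groups = KMeans(n_clusters=16, n_init=10, random_state=0)
        groups.fit(speaker_models)

        def identify(test_vec):
            # Stage 2: search only the group nearest to the test vector
            g = groups.predict(test_vec[None, :])[0]
            members = np.flatnonzero(groups.labels_ == g)
            dists = np.linalg.norm(speaker_models[members] - test_vec, axis=1)
            return members[np.argmin(dists)]  # best-matching enrolled speaker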

    Spoken Word Recognition Using Hidden Markov Model

    The main aim of this project is to develop an isolated spoken word recognition system using Hidden Markov Models (HMMs) with good accuracy across the full frequency range of the human voice. Ten different words are recorded by different speakers, male and female, and the results are compared across different feature extraction methods. Earlier work recognised seven short utterances using an HMM with only one feature extraction method. The spoken word recognition system divides into two major blocks. The first covers the recording database and feature extraction of the recorded signals; mel-frequency cepstral coefficients, linear-frequency cepstral coefficients and the fundamental frequency are used as features. To obtain mel-frequency cepstral coefficients, the signal goes through pre-emphasis, framing, windowing, the Fast Fourier Transform, a mel filter bank, and finally the discrete cosine transform; linear-frequency cepstral coefficients omit the mel-frequency warping. The second block describes the HMM used for modelling and recognizing the spoken words. All training samples are clustered using the K-means algorithm, and Gaussian mixtures with means, variances and weights serve as the model parameters. The Baum-Welch algorithm is used to train on the samples and re-estimate the parameters, and finally the Viterbi algorithm finds the model whose best state sequence most closely matches the given spoken utterance. All simulations are done with MATLAB on the Microsoft Windows 7 operating system.
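    A condensed sketch of such a recognition back-end, using the hmmlearn library's GMM-HMM (Baum-Welch for training, Viterbi for decoding); the per-word feature dictionary and the model sizes are illustrative assumptions, not the project's exact configuration:

        import numpy as np
        from hmmlearn import hmm

        def train_models(word_features, n_states=5, n_mix=2):
            # word_features: dict word -> list of (frames x dims) MFCC arrays
            models = {}
            for word, utts in word_features.items():
                m = hmm.GMMHMM(n_components=n_states, n_mix=n_mix,
                               covariance_type="diag", n_iter=20)
                m.fit(np.vstack(utts), [u.shape[0] for u in utts])  # Baum-Welch
                models[word] = m
            return models

        def recognize(models, mfcc):
            # Viterbi log-likelihood under each word model; pick the best
            return max(models, key=lambda w: models[w].decode(mfcc)[0])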

    Speaker Recognition
