1,682 research outputs found

    Speech Synthesis Based on Hidden Markov Models


    Large-vocabulary speaker-independent continuous speech recognition with semi-continuous hidden Markov models

    A semi-continuous hidden Markov model based on multiple vector quantization codebooks is used here for large-vocabulary speaker-independent continuous speech recognition. In the techniques employed here, the semi-continuous output probability density function for each codebook is represented by a combination of the corresponding discrete output probabilities of the hidden Markov model and the continuous Gaussian density functions of each individual codebook. Parameters of the vector quantization codebook and the hidden Markov model are mutually optimized to achieve an optimal model/codebook combination under a unified probabilistic framework. Another advantage of this approach is the enhanced robustness of the semi-continuous output probability obtained by combining multiple codewords and multiple codebooks. For a 1000-word speaker-independent …
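
    The combination described in this abstract can be read as a mixture: each codebook contributes its codeword Gaussian densities, weighted by the state's discrete output probabilities. The sketch below (Python/NumPy, with illustrative function names; the top-M codeword pruning and the independence of feature streams across codebooks are assumptions, not taken from the abstract) shows the shape of that computation.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def semi_continuous_output_prob(x, discrete_probs, means, covs, top_m=3):
        """Semi-continuous output probability for one state and one codebook:
        the codebook's Gaussian densities evaluated at observation x, weighted
        by the state's discrete output probabilities (a NumPy array over
        codewords). Keeping only the top-M codewords is a common pruning
        choice and an assumption here."""
        densities = np.array([multivariate_normal.pdf(x, mean=m, cov=c)
                              for m, c in zip(means, covs)])
        top = np.argsort(densities)[-top_m:]   # indices of the M closest codewords
        return float(np.dot(discrete_probs[top], densities[top]))

    def multi_codebook_output_prob(features, state_probs, codebooks):
        """Combine several codebooks (e.g. cepstrum, delta cepstrum, energy) by
        multiplying their per-codebook probabilities, treating the feature
        streams as independent (also an assumption)."""
        p = 1.0
        for x, probs, (means, covs) in zip(features, state_probs, codebooks):
            p *= semi_continuous_output_prob(x, probs, means, covs)
        return p
    ```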

    Evaluation of preprocessors for neural network speaker verification


    Wavenet based low rate speech coding

    Traditional parametric coding of speech facilitates low rate but provides poor reconstruction quality because of the inadequacy of the model used. We describe how a WaveNet generative speech model can be used to generate high quality speech from the bit stream of a standard parametric coder operating at 2.4 kb/s. We compare this parametric coder with a waveform coder based on the same generative model and show that approximating the signal waveform incurs a large rate penalty. Our experiments confirm the high performance of the WaveNet based coder and show that the system additionally performs implicit bandwidth extension and does not significantly impair recognition of the original speaker for the human listener, even when that speaker has not been used during the training of the generative model.
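
    As a rough illustration of the decoder structure this abstract describes (not the paper's implementation): the parametric bit stream is decoded into per-frame conditioning features, and a conditional autoregressive model then generates the waveform sample by sample. The sketch below is Python/NumPy with a toy stand-in for the trained WaveNet; the function names, frame length, and receptive-field size are all assumptions.

    ```python
    import numpy as np

    def generative_decode(cond_frames, sample_fn, samples_per_frame=80,
                          receptive_field=256):
        """Decoder-side sketch: for each frame of conditioning features decoded
        from the 2.4 kb/s parametric bit stream, draw waveform samples one at a
        time from a conditional autoregressive model, feeding each new sample
        back into the model's context. `sample_fn(context, cond)` stands in for
        a trained WaveNet-style network (assumed, not from the paper)."""
        history = np.zeros(receptive_field, dtype=np.float32)  # zero initial context
        output = []
        for cond in cond_frames:
            for _ in range(samples_per_frame):
                s = np.float32(sample_fn(history[-receptive_field:], cond))
                output.append(s)
                history = np.append(history, s)
        return np.asarray(output, dtype=np.float32)

    # Toy stand-in, only to show the call shape; a real system would sample from
    # the network's predictive distribution over quantized amplitudes.
    def toy_sample_fn(context, cond):
        return float(np.tanh(context[-1] + cond.mean()) * 0.01)

    waveform = generative_decode([np.zeros(18, dtype=np.float32)] * 3, toy_sample_fn)
    ```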