
    High-density discrete HMM with the use of scalar quantization indexing

    With the advances in semiconductor memory and the availability of very large speech corpora (hundreds to thousands of hours of speech), we revisit the use of the discrete hidden Markov model (DHMM) in automatic speech recognition. To estimate the discrete density in a DHMM state, the acoustic space is divided into bins and one simply counts the relative number of observations falling into each bin. With a very large speech corpus, we believe that the number of bins may be greatly increased to obtain a much finer density than before; we call the new model the high-density discrete hidden Markov model (HDDHMM). Our HDDHMM differs from the traditional DHMM in two aspects: first, the codebook has a size in the thousands or even tens of thousands; second, we propose a method based on scalar quantization indexing so that, for a d-dimensional acoustic vector, the discrete codeword can be determined in O(d) time. During recognition, the state probability is reduced to an O(1) table look-up. The new HDDHMM was tested on WSJ0 with a 5K vocabulary. Compared with a baseline 4-stream continuous-density HMM system with a WER of 9.71%, a 4-stream HDDHMM system converted from the former achieves a WER of 11.60%, with no distance or Gaussian computation.
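    To make the scalar quantization indexing idea concrete, here is a minimal sketch in Python/NumPy of how a d-dimensional acoustic vector could be scalar-quantized per dimension in roughly O(d) time and its state emission probability read off with a single table look-up. The bin edges, the dense table layout, and all names (bin_edges, state_prob, etc.) are illustrative assumptions for this sketch, not the paper's actual data structures or codebook construction.

```python
# Sketch: per-dimension scalar quantization followed by an O(1) table look-up.
# Assumption: each dimension has its own set of learned bin edges, and a
# per-state probability table is indexed by the tuple of scalar indices.
import numpy as np

d = 4                # dimensionality of the acoustic vector (toy value)
bins_per_dim = 8     # scalar quantization levels per dimension (toy value)

rng = np.random.default_rng(0)

# Hypothetical per-dimension bin edges, e.g. estimated from training data.
bin_edges = [np.sort(rng.normal(size=bins_per_dim - 1)) for _ in range(d)]

# Hypothetical per-state probability table over the discrete codewords.
# (A real system would not store a full d-dimensional grid like this.)
state_prob = rng.dirichlet(np.ones(bins_per_dim ** d)).reshape((bins_per_dim,) * d)

def codeword_indices(x):
    """Scalar-quantize each dimension independently.

    With a fixed number of levels per dimension this costs one bounded
    search per dimension, i.e. effectively O(d) overall.
    """
    return tuple(int(np.searchsorted(bin_edges[i], x[i])) for i in range(d))

def state_log_prob(x):
    """Emission score: O(d) indexing, then a single O(1) table look-up."""
    idx = codeword_indices(x)
    return float(np.log(state_prob[idx] + 1e-12))

x = rng.normal(size=d)   # a toy acoustic vector
print(codeword_indices(x), state_log_prob(x))
```

    The point of the sketch is the cost profile the abstract describes: no distance or Gaussian computation is needed at recognition time, only per-dimension quantization and a table read.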