Performing speech recognition on multiple parallel files using continuous hidden Markov models on an FPGA

By Stephen Jonathan Melnikoff, Steven Francis Quigley and Martin Russell

Abstract

Speech recognition is a computationally demanding task, particularly the stages which use Viterbi decoding to convert pre-processed speech data into words or subword units, and the associated observation probability calculations, which employ multivariate Gaussian distributions; so any device that can reduce the load on, for example, a PC's processor is advantageous. Hence we present two implementations of a speech recognition system incorporating an FPGA, employing continuous hidden Markov models (HMMs), and capable of processing three speech files simultaneously. The first uses monophones, and can perform recognition 250 times real time (in terms of average time per observation), as well as outperforming its software equivalent. The second uses biphones and triphones, which reduces the speedup to 13 times real time.
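The core computations named in the abstract, Viterbi decoding over HMM states and per-state observation likelihoods from Gaussian densities, can be sketched in software as follows. This is a minimal illustrative sketch only (a single diagonal-covariance Gaussian per state, computed in the log domain), not the authors' FPGA design; all function and variable names here are hypothetical.

    import numpy as np

    def log_gauss_diag(x, mean, var):
        # Log-likelihood of observation x under a diagonal-covariance Gaussian.
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

    def viterbi(obs, log_pi, log_A, means, vars_):
        # obs: (T, D) observation vectors; log_pi: (N,) initial log-probs;
        # log_A: (N, N) transition log-probs; means, vars_: (N, D) Gaussian params.
        T, N = len(obs), len(log_pi)
        delta = np.full((T, N), -np.inf)   # best path log-scores
        psi = np.zeros((T, N), dtype=int)  # backpointers
        for j in range(N):
            delta[0, j] = log_pi[j] + log_gauss_diag(obs[0], means[j], vars_[j])
        for t in range(1, T):
            for j in range(N):
                scores = delta[t - 1] + log_A[:, j]
                psi[t, j] = np.argmax(scores)
                delta[t, j] = scores[psi[t, j]] + log_gauss_diag(obs[t], means[j], vars_[j])
        # Backtrace the most likely state sequence.
        path = np.zeros(T, dtype=int)
        path[-1] = np.argmax(delta[-1])
        for t in range(T - 2, -1, -1):
            path[t] = psi[t + 1, path[t + 1]]
        return path, delta[-1, path[-1]]

In the hardware implementations described in the abstract, these per-observation calculations are carried out for three speech files in parallel.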

Topics: TK Electrical engineering. Electronics. Nuclear engineering, QA75 Electronic computers. Computer science
Publisher: IEEE
Year: 2002
OAI identifier: oai:eprints.bham.ac.uk:27
Full text available at the following location(s):
  • http://eprints.bham.ac.uk/27/

