Four different visual speech parameterisation methods are compared on a large-vocabulary, continuous, audio-visual speech recognition task using the IBM ViaVoice™ audio-visual speech database. Three are transforms applied directly to the mouth image region: the discrete cosine transform, the discrete wavelet transform, and principal component analysis. The fourth uses a statistical model of shape and appearance, called an active appearance model, to track the entire face and obtain model parameters describing it. All parameterisations are compared experimentally using hidden Markov models (HMMs) in a speaker-independent test. Visual-only HMMs are used to rescore lattices obtained from audio models trained in noisy conditions.
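To make the image-transform parameterisations concrete, the sketch below shows one of the three direct approaches, a 2-D discrete cosine transform of a mouth region of interest, with the low-frequency coefficients retained as the visual feature vector. This is a minimal NumPy illustration of the general technique, not the authors' exact pipeline; the ROI size (32×32) and the number of retained coefficients (`keep=6`) are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (n x n).
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def dct2_features(roi, keep=6):
    # Separable 2-D DCT of a grey-level mouth ROI; keep the top-left
    # keep x keep low-frequency coefficients, flattened into a vector.
    h, w = roi.shape
    coeffs = dct_matrix(h) @ roi @ dct_matrix(w).T
    return coeffs[:keep, :keep].ravel()

rng = np.random.default_rng(0)
roi = rng.random((32, 32))      # stand-in for an extracted mouth image
feat = dct2_features(roi, keep=6)
print(feat.shape)               # (36,) per-frame visual feature vector
```

Per-frame vectors of this kind, computed over the video sequence, form the observation stream modelled by the visual-only HMMs; the wavelet and PCA parameterisations replace the DCT step with a different linear transform.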