
End-to-end visual speech recognition with LSTMs

By Stavros Petridis, Zuwei Li and Maja Pantic

Abstract

Traditional visual speech recognition systems consist of two stages, feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images and aim to replace the feature extraction stage. However, research on joint learning of features and classification is very limited. In this work, we present an end-to-end visual speech recognition system based on Long Short-Term Memory (LSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and perform classification, and which also achieves state-of-the-art performance in visual speech classification. The model consists of two streams which extract features directly from the mouth and difference images, respectively. The temporal dynamics in each stream are modelled by an LSTM and the fusion of the two streams takes place via a Bidirectional LSTM (BLSTM). An absolute improvement of 9.7% over the baseline is reported on the OuluVS2 database, and 1.5% on the CUAVE database when compared with other methods which use a similar visual front-end.
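The abstract outlines a two-stream architecture: each stream encodes its input (raw mouth images or difference images) and models its temporal dynamics with an LSTM, and the two streams are fused by a BLSTM before classification. The sketch below is a minimal, illustrative rendering of that idea in PyTorch; the per-frame dense encoder, layer sizes, pooling of the final time step, and all hyper-parameters are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """One stream: a per-frame encoder (assumed dense layer) followed by an LSTM
    that models the temporal dynamics of that stream."""
    def __init__(self, input_dim, hidden_dim=256):
        super().__init__()
        self.frame_encoder = nn.Sequential(          # assumed per-frame encoder
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, x):                            # x: (batch, time, pixels)
        b, t, d = x.shape
        feats = self.frame_encoder(x.reshape(b * t, d)).reshape(b, t, -1)
        out, _ = self.lstm(feats)                    # per-stream temporal model
        return out

class TwoStreamLipreader(nn.Module):
    """Two streams (mouth images and difference images) fused by a BLSTM,
    followed by a linear classifier over the utterance."""
    def __init__(self, input_dim, num_classes, hidden_dim=256):
        super().__init__()
        self.mouth_stream = StreamEncoder(input_dim, hidden_dim)
        self.diff_stream = StreamEncoder(input_dim, hidden_dim)
        self.fusion_blstm = nn.LSTM(2 * hidden_dim, hidden_dim,
                                    batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, mouth, diff):
        fused = torch.cat([self.mouth_stream(mouth),
                           self.diff_stream(diff)], dim=-1)
        out, _ = self.fusion_blstm(fused)
        return self.classifier(out[:, -1])           # utterance-level prediction

# Illustrative usage: 29x50 mouth ROIs over 20 frames, 10 phrase classes.
model = TwoStreamLipreader(input_dim=29 * 50, num_classes=10)
mouth = torch.randn(4, 20, 29 * 50)
diff = mouth[:, 1:] - mouth[:, :-1]                  # frame differences
diff = torch.cat([torch.zeros_like(diff[:, :1]), diff], dim=1)
logits = model(mouth, diff)                          # shape: (4, 10)
```

In this sketch the difference-image stream is simply the temporal difference of consecutive frames, and the classification is made from the final BLSTM time step; both are plausible readings of the abstract rather than details taken from the paper.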

Topics: Deep Networks, End-to-End Training, Lipreading, Long-Short Term Recurrent Neural Networks, Visual Speech Recognition, Software, Signal Processing, Electrical and Electronic Engineering
Publisher: IEEE
Year: 2017
DOI identifier: 10.1109/icassp.2017.7952625
Provided by: NARCIS

