Lip2AudSpec: Speech reconstruction from silent lip movements video
In this study, we propose a deep neural network for reconstructing
intelligible speech from silent lip-movement videos. We use the auditory
spectrogram as the spectral representation of speech, together with its
corresponding sound-generation method, resulting in more natural-sounding
reconstructed speech. Our proposed network consists of an autoencoder that
extracts bottleneck features from the auditory spectrogram; these features
then serve as targets for our main lip-reading network, which comprises CNN,
LSTM, and fully connected layers. Our experiments show that the autoencoder
reconstructs the original auditory spectrogram with 98% correlation and also
improves the quality of the speech reconstructed by the main lip-reading
network. Our model, trained jointly on different speakers, is able to extract
individual speaker characteristics and gives promising results, reconstructing
intelligible speech with superior word recognition accuracy.
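The two-stage design described above can be sketched in a few lines. This is a minimal numpy illustration of the data flow only, not the paper's trained model: the dimensions (128 spectrogram bins, a 32-unit bottleneck, 25 fps video) and the linear encoder/decoder are assumptions for illustration; the actual network uses learned CNN, LSTM, and fully connected layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the abstract does not specify them.
SPEC_BINS = 128   # auditory-spectrogram frequency bins per frame
BOTTLENECK = 32   # autoencoder bottleneck size

# Toy linear autoencoder standing in for the paper's spectrogram autoencoder
# (untrained random weights; shown only to make the data flow concrete).
W_enc = rng.normal(0.0, 0.1, (SPEC_BINS, BOTTLENECK))
W_dec = rng.normal(0.0, 0.1, (BOTTLENECK, SPEC_BINS))

def encode(spec_frames):
    """Compress spectrogram frames (T, SPEC_BINS) to bottleneck features."""
    return spec_frames @ W_enc

def decode(bottleneck_feats):
    """Map bottleneck features (T, BOTTLENECK) back to spectrogram frames."""
    return bottleneck_feats @ W_dec

# The lip-reading network (CNN + LSTM in the paper) is trained to predict
# these bottleneck features from video; here we only show the target shapes.
T = 75                                  # e.g. 3 s of video at 25 fps
spec = rng.random((T, SPEC_BINS))       # stand-in auditory spectrogram
targets = encode(spec)                  # regression targets for the lip reader
recon = decode(targets)                 # spectrogram reconstruction
print(targets.shape, recon.shape)       # (75, 32) (75, 128)
```

The point of the bottleneck is that the lip reader regresses a compact 32-dimensional target per frame instead of the full 128-bin spectrogram, and the decoder then expands its predictions back to a spectrogram for sound generation.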
Lip-Listening: Mixing Senses to Understand Lips using Cross Modality Knowledge Distillation for Word-Based Models
In this work, we propose a technique for transferring speech recognition
capabilities from audio speech recognition systems to visual speech
recognizers; our goal is to utilize audio data during lipreading model
training. Impressive progress in the domain of speech recognition has been
exhibited by audio and audio-visual systems. Nevertheless, there is still much
to be explored in visual speech recognition due to the visual ambiguity of
some phonemes, and the development of visual speech recognition models is
crucial given the instability of audio models. The main contributions of this
work are: i) building on recent state-of-the-art word-based lipreading models
by integrating sequence-level and frame-level Knowledge Distillation (KD) into
their systems; ii) leveraging audio data during the training of visual models,
which has not been done in prior word-based work; iii) proposing
Gaussian-shaped averaging in frame-level KD, an efficient technique that aids
the model in distilling knowledge at the sequence-model encoder. This work
proposes a novel and competitive architecture for lip-reading, as we
demonstrate a noticeable improvement in performance, setting a new benchmark
of 88.64% on the LRW dataset.
Comment: arXiv admin note: text overlap with arXiv:2108.0354
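One plausible reading of the Gaussian-shaped averaging in frame-level KD is that each student frame distils from a Gaussian-weighted neighbourhood of teacher frames rather than from a single time-aligned frame. The sketch below is an assumption-laden illustration of that idea in numpy; the function name, the sigma parameter, and the exact weighting scheme are hypothetical and not taken from the paper.

```python
import numpy as np

def gaussian_frame_averaging(teacher_feats, sigma=1.5):
    """Smooth teacher frame features (T, D) with Gaussian weights centred on
    each frame index, so frame-level distillation targets cover a soft
    temporal neighbourhood instead of one aligned frame (hypothetical
    formulation, not the paper's exact definition)."""
    T = teacher_feats.shape[0]
    idx = np.arange(T)
    # (T, T) matrix: row t holds Gaussian weights centred at frame t.
    w = np.exp(-0.5 * ((idx[None, :] - idx[:, None]) / sigma) ** 2)
    w /= w.sum(axis=1, keepdims=True)   # normalise each row to sum to 1
    return w @ teacher_feats

feats = np.arange(12, dtype=float).reshape(6, 2)  # 6 frames, 2-dim features
smoothed = gaussian_frame_averaging(feats, sigma=1.0)
print(smoothed.shape)  # (6, 2)
```

Because each row of weights is a convex combination, every smoothed feature stays within the range of the original teacher features, which keeps the distillation targets well behaved at sequence boundaries.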
Final Report to NSF of the Standards for Facial Animation Workshop
The human face is an important and complex communication channel, and a very familiar and sensitive object of human perception. The facial animation field has grown greatly in the past few years as fast computer graphics workstations have made the modeling and real-time animation of hundreds of thousands of polygons affordable and almost commonplace. Many applications have been developed, such as teleconferencing, surgery, information assistance systems, games, and entertainment. To address these different problems, different approaches to both animation control and modeling have been developed.
- …