Visually Indicated Sounds
Objects make distinctive sounds when they are hit or scratched. These sounds
reveal aspects of an object's material properties, as well as the actions that
produced them. In this paper, we propose the task of predicting what sound an
object makes when struck as a way of studying physical interactions within a
visual scene. We present an algorithm that synthesizes sound from silent videos
of people hitting and scratching objects with a drumstick. This algorithm uses
a recurrent neural network to predict sound features from videos and then
produces a waveform from these features with an example-based synthesis
procedure. We show that the sounds predicted by our model are realistic enough
to fool participants in a "real or fake" psychophysical experiment, and that
they convey significant information about material properties and physical
interactions.
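The example-based synthesis step described above can be sketched as a nearest-neighbor lookup: predicted sound features are matched against a database of (feature, waveform) pairs from training videos, and the closest example's waveform is reused. This is a minimal illustration with made-up names and dimensions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database of (feature vector, waveform) pairs from training videos.
db_features = rng.normal(size=(100, 16))      # 100 examples, 16-dim sound features
db_waveforms = rng.normal(size=(100, 2205))   # 0.05 s clips at 44.1 kHz (dummy)

def synthesize(predicted_features):
    """Return the waveform of the nearest training example (L2 distance)."""
    dists = np.linalg.norm(db_features - predicted_features, axis=1)
    return db_waveforms[np.argmin(dists)]

# Query with features close to example 7; the lookup returns its waveform.
query = db_features[7] + 0.01 * rng.normal(size=16)
waveform = synthesize(query)
```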
Vid2speech: Speech Reconstruction from Silent Video
Speechreading is a notoriously difficult task for humans to perform. In this
paper we present an end-to-end model based on a convolutional neural network
(CNN) for generating an intelligible acoustic speech signal from silent video
frames of a speaking person. The proposed CNN generates sound features for each
frame based on its neighboring frames. Waveforms are then synthesized from the
learned speech features to produce intelligible speech. We show that by
leveraging the automatic feature learning capabilities of a CNN, we can obtain
state-of-the-art word intelligibility on the GRID dataset, and show promising
results for learning out-of-vocabulary (OOV) words.
Comment: Accepted for publication at ICASSP 201
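The per-frame input described above, where the CNN predicts sound features for each frame from its neighbors, can be sketched as a sliding window over the video. The window size K and the edge-padding choice here are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def stack_neighbors(frames, k=2):
    """frames: (T, H, W) grayscale video -> (T, 2k+1, H, W) windows,
    padding the boundaries by repeating the first/last frame."""
    t = len(frames)
    padded = np.concatenate([frames[:1].repeat(k, axis=0),
                             frames,
                             frames[-1:].repeat(k, axis=0)], axis=0)
    return np.stack([padded[i:i + 2 * k + 1] for i in range(t)], axis=0)

video = np.zeros((10, 64, 64))        # 10 dummy frames
windows = stack_neighbors(video, k=2) # each frame paired with 2 neighbors per side
print(windows.shape)                  # (10, 5, 64, 64)
```

Each `(2k+1, H, W)` window would then be fed to the CNN to predict that frame's sound features.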
Seeing Through Noise: Visually Driven Speaker Separation and Enhancement
Isolating the voice of a specific person while filtering out other voices or
background noises is challenging when video is shot in noisy environments. We
propose audio-visual methods to isolate the voice of a single speaker and
eliminate unrelated sounds. First, face motions captured in the video are used
to estimate the speaker's voice, by passing the silent video frames through a
video-to-speech neural network-based model. Then the speech predictions are
applied as a filter on the noisy input audio. This approach avoids using
mixtures of sounds in the learning process, as the number of such possible
mixtures is huge, and would inevitably bias the trained model. We evaluate our
method on two audio-visual datasets, GRID and TCD-TIMIT, and show that our
method attains significant SDR and PESQ improvements over the raw
video-to-speech predictions, and a well-known audio-only method.
Comment: Supplementary video: https://www.youtube.com/watch?v=qmsyj7vAzo
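The filtering step described above, where the video-predicted speech is applied as a filter on the noisy input, can be sketched as a soft time-frequency mask on magnitude spectrograms. The ratio-mask form below is a common recipe assumed for illustration, not necessarily the exact filter used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
noisy_mag = rng.uniform(0.0, 1.0, size=(257, 100))      # |STFT| of noisy input (dummy)
predicted_mag = rng.uniform(0.0, 1.0, size=(257, 100))  # |STFT| predicted from video

def apply_speech_mask(noisy, predicted, eps=1e-8):
    """Soft mask: time-frequency bins where the predicted speech is
    strong pass through; bins dominated by other sounds are attenuated."""
    mask = np.minimum(predicted / (noisy + eps), 1.0)  # values in [0, 1]
    return mask * noisy

enhanced = apply_speech_mask(noisy_mag, predicted_mag)
```

The enhanced magnitudes would then be combined with the noisy phase and inverted back to a waveform.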
Visual Speech Enhancement
When video is shot in a noisy environment, the voice of a speaker seen in the
video can be enhanced using the visible mouth movements, reducing background
noise. While most existing methods use audio-only inputs, improved performance
is obtained with our visual speech enhancement, based on an audio-visual neural
network. We include in the training data videos to which we added the voice of
the target speaker as background noise. Since the audio input is not sufficient
to separate the voice of a speaker from his own voice, the trained model better
exploits the visual input and generalizes well to different noise types. The
proposed model outperforms prior audio-visual methods on two public lipreading
datasets. It is also the first to be demonstrated on a dataset not designed for
lipreading, such as the weekly addresses of Barack Obama.
Comment: Accepted to Interspeech 2018. Supplementary video: https://www.youtube.com/watch?v=nyYarDGpcY
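The training-data construction described above, adding the target speaker's own voice as background noise so that audio alone cannot separate target from interference, can be sketched as SNR-controlled mixing. The mixing formula below is a standard recipe assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(size=16000)         # 1 s target utterance (dummy, 16 kHz)
interference = rng.normal(size=16000)  # another utterance by the SAME speaker

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the mixture has the requested signal-to-noise ratio."""
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

# At 0 dB the interfering voice is as loud as the target: a model trained on
# such pairs cannot rely on voice identity and must exploit the visual input.
mixture = mix_at_snr(clean, interference, snr_db=0.0)
```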
You said that?
We present a method for generating a video of a talking face. The method
takes as inputs: (i) still images of the target face, and (ii) an audio speech
segment; and outputs a video of the target face lip synched with the audio. The
method runs in real time and is applicable to faces and audio not seen at
training time.
To achieve this we propose an encoder-decoder CNN model that uses a joint
embedding of the face and audio to generate synthesised talking face video
frames. The model is trained on tens of hours of unlabelled videos.
We also show results of re-dubbing videos using speech from a different
person.
Comment: https://youtu.be/LeufDSb15Kc British Machine Vision Conference (BMVC), 201
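The joint-embedding idea described above can be sketched at the shape level: a face image and an audio segment are encoded separately, the two embeddings are concatenated into a joint code, and a decoder maps that code to a synthesized frame. The untrained random weights and all dimensions below are purely illustrative; this is not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)
W_face = rng.normal(size=(128, 112 * 112), scale=0.01)  # face-encoder weights (dummy)
W_audio = rng.normal(size=(128, 320), scale=0.01)       # audio-encoder weights (dummy)
W_dec = rng.normal(size=(112 * 112, 256), scale=0.01)   # decoder weights (dummy)

def generate_frame(face_img, audio_seg):
    """Encode face and audio separately, concatenate, decode to a frame."""
    face_code = np.tanh(W_face @ face_img.ravel())   # (128,) identity embedding
    audio_code = np.tanh(W_audio @ audio_seg)        # (128,) speech embedding
    joint = np.concatenate([face_code, audio_code])  # (256,) joint embedding
    return (W_dec @ joint).reshape(112, 112)         # synthesized face frame

frame = generate_frame(rng.normal(size=(112, 112)), rng.normal(size=320))
```

Because identity comes from the face branch and lip motion from the audio branch, the same decoder can re-dub a face with a different person's speech.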
Lip2AudSpec: Speech reconstruction from silent lip movements video
In this study, we propose a deep neural network for reconstructing
intelligible speech from silent lip movement videos. We use the auditory
spectrogram as the spectral representation of speech, together with its
corresponding sound generation method, resulting in more natural-sounding
reconstructed speech.
Our proposed network consists of an autoencoder that extracts bottleneck
features from the auditory spectrogram, which are then used as the target of
our main lipreading network, comprising CNN, LSTM, and fully connected layers.
Our experiments show that the autoencoder is able to reconstruct the original
auditory spectrogram with 98% correlation, and that it also improves the
quality of the speech reconstructed by the main lipreading network. Our model,
trained jointly on different speakers, is able to extract individual speaker
characteristics and gives promising results in reconstructing intelligible
speech with superior word recognition accuracy.
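The correlation figure quoted above can be illustrated as a 2-D Pearson correlation between a spectrogram and its reconstruction. The toy "reconstruction" below is just the original plus small noise; the autoencoder itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
spec = rng.uniform(size=(128, 250))                 # toy auditory spectrogram
recon = spec + 0.02 * rng.normal(size=spec.shape)   # stand-in reconstruction

def correlation(a, b):
    """Pearson correlation of two equally shaped arrays, flattened."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

r = correlation(spec, recon)   # close to 1 for a faithful reconstruction
```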