Effects of Lombard Reflex on the Performance of Deep-Learning-Based Audio-Visual Speech Enhancement Systems
Humans tend to change their way of speaking when immersed in a noisy environment, a reflex known as the Lombard effect. Current speech enhancement systems based on deep learning usually do not take this change in speaking style into account, because they are trained on neutral (non-Lombard) speech utterances recorded under quiet conditions to which noise is artificially added. In this paper, we investigate the effects that the Lombard reflex has on the performance of audio-visual speech enhancement systems based on deep learning. The results show a performance gap of up to approximately 5 dB between the systems trained on neutral speech and those trained on Lombard speech. This indicates the benefit of taking the mismatch between neutral and Lombard speech into account in the design of audio-visual speech enhancement systems.
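As an illustration of the training-data mismatch described above, the following minimal Python sketch shows the common procedure of artificially adding noise to neutral speech at a chosen signal-to-noise ratio; the function name and dummy signals are hypothetical and not taken from the paper.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): creating noisy
# training mixtures by adding noise to neutral speech at a chosen SNR.
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so that the clean-to-noise power ratio equals `snr_db`."""
    noise = noise[: len(clean)]                      # match lengths
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Dummy example signals (1 s at 16 kHz); real pipelines would load recordings.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)
noise = rng.standard_normal(16000)
noisy = mix_at_snr(clean, noise, snr_db=0.0)
```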
On Training Targets and Objective Functions for Deep-Learning-Based Audio-Visual Speech Enhancement
Audio-visual speech enhancement (AV-SE) is the task of improving speech quality and intelligibility in a noisy environment using audio and visual information from a talker. Recently, deep learning techniques have been adopted to solve the AV-SE task in a supervised manner. In this context, the choice of the target, i.e. the quantity to be estimated, and of the objective function, which quantifies the quality of this estimate, is critical for performance. This work is the first to present an experimental study of a range of different targets and objective functions used to train a deep-learning-based AV-SE system. The results show that approaches that directly estimate a mask perform best overall in terms of estimated speech quality and intelligibility, although the model that directly estimates the log magnitude spectrum performs equally well in terms of estimated speech quality.
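To make the distinction between targets concrete, the sketch below (a simplified, assumed formulation, not the paper's exact definitions) contrasts a mask target with a log-magnitude target, each trained with a mean squared error objective.

```python
# Minimal sketch: two common training targets for time-frequency-domain speech
# enhancement and an MSE objective for each. clean_mag and noisy_mag are the
# clean and noisy magnitude spectrograms (dummy tensors here).
import torch

def ideal_amplitude_mask(clean_mag, noisy_mag, eps=1e-8, max_val=10.0):
    # Mask target: element-wise ratio of clean to noisy magnitudes, clipped.
    return torch.clamp(clean_mag / (noisy_mag + eps), max=max_val)

def log_magnitude(clean_mag, eps=1e-8):
    # Direct-mapping target: log magnitude spectrum of the clean speech.
    return torch.log(clean_mag + eps)

mse = torch.nn.MSELoss()
clean_mag = torch.rand(1, 257, 100)                  # (batch, freq, time)
noisy_mag = clean_mag + 0.5 * torch.rand_like(clean_mag)

mask_target = ideal_amplitude_mask(clean_mag, noisy_mag)
logmag_target = log_magnitude(clean_mag)

pred_mask = torch.rand_like(mask_target)             # stand-ins for network outputs
pred_logmag = torch.randn_like(logmag_target)

loss_mask = mse(pred_mask, mask_target)              # objective for mask estimation
loss_logmag = mse(pred_logmag, logmag_target)        # objective for spectral mapping
```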
Audio-Visual Speech Inpainting with Deep Learning
In this paper, we present a deep-learning-based framework for audio-visual speech inpainting, i.e., the task of restoring the missing parts of an acoustic speech signal from reliable audio context and uncorrupted visual information. Recent work has focused solely on audio-only methods and generally aims at inpainting music signals, which have a very different structure from speech. Instead, we inpaint speech signals with gaps ranging from 100 ms to 1600 ms to investigate the contribution that vision can provide for gaps of different durations. We also experiment with a multi-task learning approach in which a phone recognition task is learned together with speech inpainting. The results show that the performance of audio-only speech inpainting approaches degrades rapidly as gaps get longer, while the proposed audio-visual approach is able to plausibly restore the missing information. In addition, we show that multi-task learning is effective, although the largest contribution to performance comes from vision.
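A minimal sketch of such a multi-task objective, under the assumption of an L1 reconstruction loss on the gap region combined with a CTC-style phone recognition loss; both are illustrative choices, not necessarily those used in the paper.

```python
# Minimal sketch (assumed formulation): multi-task loss combining an inpainting
# reconstruction term on the missing frames with an auxiliary phone loss.
import torch

def multitask_loss(pred_spec, target_spec, gap_mask, phone_logits, phone_targets,
                   input_lengths, target_lengths, alpha=0.5):
    # Reconstruction error computed only on the missing (gap) frames.
    recon = torch.nn.functional.l1_loss(pred_spec * gap_mask, target_spec * gap_mask)
    # Auxiliary phone recognition loss (CTC over the whole utterance).
    ctc = torch.nn.functional.ctc_loss(
        phone_logits.log_softmax(-1).transpose(0, 1),   # (time, batch, classes)
        phone_targets, input_lengths, target_lengths)
    # alpha is a hypothetical weighting between the two tasks.
    return recon + alpha * ctc
```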
Speech inpainting: Context-based speech synthesis guided by video
Audio and visual modalities are inherently connected in speech signals: lip movements and facial expressions are correlated with speech sounds. This motivates studies that incorporate the visual modality to enhance an acoustic speech signal or even restore missing audio information. Specifically, this paper focuses on the problem of audio-visual speech inpainting, i.e., the task of synthesizing the speech in a corrupted audio segment in a way that is consistent with the corresponding visual content and with the uncorrupted audio context. We present an audio-visual transformer-based deep learning model that leverages visual cues providing information about the content of the corrupted audio. It outperforms the previous state-of-the-art audio-visual model as well as audio-only baselines. We also show that visual features extracted with AV-HuBERT, a large audio-visual transformer for speech recognition, are suitable for synthesizing speech.
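The sketch below outlines one plausible way to condition an inpainting transformer on visual features; the architecture, dimensions, and the placeholder visual embeddings (standing in for AV-HuBERT-style features) are assumptions for illustration only, not the paper's model.

```python
# Minimal sketch (assumed architecture): a transformer encoder that receives
# audio frames with the corrupted segment zeroed out, concatenated with
# per-frame visual features, and predicts the full spectrogram.
import torch
import torch.nn as nn

class AVInpainter(nn.Module):
    def __init__(self, n_freq=80, n_vis=256, d_model=256):
        super().__init__()
        self.proj = nn.Linear(n_freq + n_vis + 1, d_model)   # +1 for the gap indicator
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.out = nn.Linear(d_model, n_freq)

    def forward(self, spec, vis, gap_mask):
        # spec: (B, T, n_freq) spectrogram; vis: (B, T, n_vis) visual features;
        # gap_mask: (B, T, 1), 1 marks missing frames.
        x = torch.cat([spec * (1 - gap_mask), vis, gap_mask], dim=-1)
        return self.out(self.encoder(self.proj(x)))

model = AVInpainter()
spec = torch.randn(2, 120, 80)                       # dummy spectrograms
vis = torch.randn(2, 120, 256)                       # placeholder visual features
gap = torch.zeros(2, 120, 1)
gap[:, 40:60] = 1.0                                  # a missing segment
restored = model(spec, vis, gap)                     # (2, 120, 80)
```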
Vocoder-Based Speech Synthesis from Silent Videos
Both acoustic and visual information influence human perception of speech. For this reason, the absence of audio in a video sequence results in extremely low speech intelligibility for untrained lip readers. In this paper, we present a way to synthesise speech from the silent video of a talker using deep learning. The system learns a mapping function from raw video frames to acoustic features and reconstructs the speech with a vocoder synthesis algorithm. To improve speech reconstruction performance, our model is also trained to predict text information in a multi-task learning fashion, and it is able to simultaneously reconstruct and recognise speech in real time. The results in terms of estimated speech quality and intelligibility show the effectiveness of our method, which exhibits an improvement over existing video-to-speech approaches.
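A minimal sketch of a video-to-acoustic-feature network with an auxiliary text head, as a rough illustration of the multi-task setup; all layer choices, shapes, and names are assumptions rather than the paper's architecture.

```python
# Minimal sketch (assumed shapes and layers): mapping a sequence of video frames
# to per-frame acoustic features for a vocoder, with an auxiliary character/phone
# prediction head for multi-task training.
import torch
import torch.nn as nn

class VideoToSpeech(nn.Module):
    def __init__(self, n_acoustic=64, n_chars=40, hidden=256):
        super().__init__()
        self.frontend = nn.Sequential(                       # per-frame visual encoder
            nn.Conv3d(1, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),
        )
        self.rnn = nn.GRU(32, hidden, batch_first=True, bidirectional=True)
        self.acoustic_head = nn.Linear(2 * hidden, n_acoustic)  # vocoder features
        self.text_head = nn.Linear(2 * hidden, n_chars)         # auxiliary task

    def forward(self, video):
        # video: (B, 1, T, H, W) grayscale mouth-region crops
        feats = self.frontend(video).squeeze(-1).squeeze(-1)     # (B, 32, T)
        feats, _ = self.rnn(feats.transpose(1, 2))               # (B, T, 2*hidden)
        return self.acoustic_head(feats), self.text_head(feats)

model = VideoToSpeech()
video = torch.randn(2, 1, 75, 64, 64)                # e.g. 3 s of frames at 25 fps
acoustic, char_logits = model(video)                 # (2, 75, 64), (2, 75, 40)
```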
An Overview of Deep-Learning-Based Audio-Visual Speech Enhancement and Separation
Speech enhancement and speech separation are two related tasks whose purpose is to extract one or several target speech signals, respectively, from a mixture of sounds generated by several sources. Traditionally, these tasks have been tackled using signal processing and machine learning techniques applied to the available acoustic signals. Since the visual aspect of speech is essentially unaffected by the acoustic environment, visual information from the target speakers, such as lip movements and facial expressions, has also been used in speech enhancement and speech separation systems. To fuse acoustic and visual information efficiently, researchers have exploited the flexibility of data-driven approaches, specifically deep learning, achieving strong performance. The steady stream of proposed techniques for extracting features and fusing multimodal information has highlighted the need for an overview that comprehensively describes and discusses deep-learning-based audio-visual speech enhancement and separation. In this paper, we provide a systematic survey of this research topic, focusing on the main elements that characterise the systems in the literature: acoustic features; visual features; deep learning methods; fusion techniques; training targets; and objective functions. In addition, we review deep-learning-based methods for speech reconstruction from silent videos and for audio-visual sound source separation of non-speech signals, since these methods can be more or less directly applied to audio-visual speech enhancement and separation. Finally, we survey commonly employed audio-visual speech datasets, given their central role in the development of data-driven approaches, and evaluation methods, since they are generally used to compare different systems and determine their performance.
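As a simple illustration of the mask-based enhancement pipeline that many of the surveyed systems share, the following sketch applies a predicted time-frequency mask to the noisy STFT and inverts it back to a waveform; the model call and its parameters are hypothetical.

```python
# Minimal sketch (illustrative pipeline, not a specific system from the survey):
# mask-based audio-visual enhancement at inference time.
import torch

def enhance(noisy_wav, visual_feats, model, n_fft=512, hop=128):
    window = torch.hann_window(n_fft)
    spec = torch.stft(noisy_wav, n_fft, hop_length=hop, window=window,
                      return_complex=True)                    # (freq, time)
    # Hypothetical model: predicts a time-frequency mask from noisy magnitudes
    # and per-frame visual features.
    mask = model(spec.abs().unsqueeze(0), visual_feats)
    enhanced_spec = spec * mask.squeeze(0)                    # apply T-F mask
    return torch.istft(enhanced_spec, n_fft, hop_length=hop, window=window)
```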