Inpainting of long audio segments with similarity graphs
We present a novel method for the compensation of long-duration data loss in
audio signals, in particular music. The concealment of such signal defects is
based on a graph that encodes signal structure in terms of time-persistent
spectral similarity. A suitable candidate segment for the substitution of the
lost content is proposed by an intuitive optimization scheme and smoothly
inserted into the gap, i.e. the lost or distorted signal region. Extensive
listening tests show that the proposed algorithm provides highly promising
results when applied to a variety of real-world music signals.
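
Since the abstract only sketches the approach, the following Python fragment illustrates one way the core idea could look: frame-wise spectral features, a search for the candidate segment whose surroundings best match the context on both sides of the gap, and a crossfaded insertion. The full method encodes these similarities in a graph and optimizes over it; this sketch reduces that to a direct best-match search, and all function and parameter names are illustrative assumptions, not the authors' code.

import numpy as np
from scipy.signal import stft

def conceal_gap(x, gap_start, gap_end, fs, n_fft=2048, hop=512, ctx=8):
    # Frame-wise log-magnitude spectra as features for spectral similarity.
    _, _, Z = stft(x, fs, nperseg=n_fft, noverlap=n_fft - hop)
    feats = np.log1p(np.abs(Z)).T                    # (frames, bins)
    g0, g1 = gap_start // hop, -(-gap_end // hop)    # gap span in frame indices
    left, right = feats[g0 - ctx:g0], feats[g1:g1 + ctx]

    def sim(a, b):                                   # cosine similarity of blocks
        return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    # Score every candidate: how well do its neighbours match both contexts?
    n_gap, best, best_score = g1 - g0, None, -np.inf
    for s in range(ctx, feats.shape[0] - n_gap - ctx):
        if abs(s - g0) <= n_gap + ctx:               # never draw from the gap itself
            continue
        score = sim(feats[s - ctx:s], left) + sim(feats[s + n_gap:s + n_gap + ctx], right)
        if score > best_score:
            best, best_score = s, score

    # Insert the winning segment, crossfading over both gap boundaries.
    gap_len, src = gap_end - gap_start, best * hop
    fade = min(hop, gap_start, len(x) - gap_end)
    patch = x[src - fade:src + gap_len + fade]
    ramp = np.linspace(0.0, 1.0, fade)
    y = x.copy()
    y[gap_start - fade:gap_start] = (1 - ramp) * y[gap_start - fade:gap_start] + ramp * patch[:fade]
    y[gap_start:gap_end] = patch[fade:fade + gap_len]
    y[gap_end:gap_end + fade] = ramp * y[gap_end:gap_end + fade] + (1 - ramp) * patch[fade + gap_len:]
    return y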
Introducing SPAIN (SParse Audio INpainter)
A novel sparsity-based algorithm for audio inpainting is proposed. It is an
adaptation of the SPADE algorithm by Kitić et al., originally developed for
audio declipping, to the task of audio inpainting. The new SPAIN (SParse Audio
INpainter) comes in synthesis and analysis variants. Experiments show that both
A-SPAIN and S-SPAIN outperform other sparsity-based inpainting algorithms.
Moreover, A-SPAIN performs on a par with the state-of-the-art method based on
linear prediction in terms of the SNR, and, for larger gaps, SPAIN is even
slightly better in terms of the PEMO-Q psychoacoustic criterion.
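
As a rough illustration of the sparsity-plus-consistency idea behind SPAIN, the sketch below alternates hard thresholding of DFT coefficients with re-imposing the reliable samples on a single frame, relaxing the sparsity level until the observation is matched. The overlapping-window processing and the exact SPADE/ADMM updates of the paper are omitted; names and defaults here are assumptions.

import numpy as np

def spain_like_frame(y, reliable, k_step=1, tol=1e-3, max_iter=500):
    """y: observed frame (gap values arbitrary); reliable: boolean mask."""
    x = np.where(reliable, y, 0.0)           # initialise the gap with zeros
    k = k_step
    for _ in range(max_iter):
        c = np.fft.fft(x)
        c[np.argsort(np.abs(c))[:-k]] = 0.0  # keep only the k largest coefficients
        z = np.real(np.fft.ifft(c))
        # Stop once the sparse approximation matches the reliable samples.
        if np.linalg.norm(z[reliable] - y[reliable]) <= tol * np.linalg.norm(y[reliable]):
            break
        x = np.where(reliable, y, z)         # data consistency, then re-threshold
        k += k_step                          # gradually relax the sparsity level
    return np.where(reliable, y, z)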
Deep speech inpainting of time-frequency masks
Transient loud intrusions, often occurring in noisy environments, can
completely overpower the speech signal and lead to an inevitable loss of
information. While existing algorithms for noise suppression can yield
impressive results, their efficacy remains limited for very low signal-to-noise
ratios or when parts of the signal are missing. To address these limitations,
here we propose an end-to-end framework for speech inpainting, the
context-based retrieval of missing or severely distorted parts of the
time-frequency representation of speech. The framework is based on a
convolutional U-Net trained via deep feature losses, obtained using speechVGG,
a deep speech feature extractor pre-trained on an auxiliary word classification
task. Our evaluation results demonstrate that the proposed framework can
recover large portions of the missing or distorted time-frequency representation of
speech, up to 400 ms and 3.2 kHz in bandwidth. In particular, our approach
provided a substantial increase in STOI & PESQ objective metrics of the
initially corrupted speech samples. Notably, using deep feature losses to train
the framework led to the best results compared to conventional approaches.
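
The deep-feature loss is the distinctive training ingredient here. The following sketch shows a generic version of such a loss, assuming a frozen extractor module that returns a list of feature maps per layer (speechVGG plays that role in the paper; extractor here is any such torch module, and the layer weights are an assumption).

import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepFeatureLoss(nn.Module):
    def __init__(self, extractor, layer_weights=(1.0, 1.0, 1.0)):
        super().__init__()
        self.extractor = extractor.eval()
        for p in self.extractor.parameters():
            p.requires_grad_(False)          # the pre-trained extractor stays frozen
        self.w = layer_weights

    def forward(self, inpainted_spec, clean_spec):
        # Compare activations of the inpainted and clean spectrograms
        # layer by layer instead of comparing the spectrograms directly.
        f_hat, f_ref = self.extractor(inpainted_spec), self.extractor(clean_spec)
        return sum(w * F.l1_loss(a, b) for w, a, b in zip(self.w, f_hat, f_ref))

# Training step, schematically (unet and speech_vgg are assumed modules):
#   loss = DeepFeatureLoss(speech_vgg)(unet(masked_spec), clean_spec)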
Audio-Visual Speech Inpainting with Deep Learning
In this paper, we present a deep-learning-based framework for audio-visual
speech inpainting, i.e., the task of restoring the missing parts of an acoustic
speech signal from reliable audio context and uncorrupted visual information.
Recent work focuses solely on audio-only methods and generally aims at
inpainting music signals, whose structure differs markedly from that of speech.
Instead, we inpaint speech signals with gaps ranging from 100 ms to 1600 ms to
investigate the contribution that vision can provide for gaps of different
duration. We also experiment with a multi-task learning approach where a phone
recognition task is learned together with speech inpainting. Results show that
the performance of audio-only speech inpainting approaches degrades rapidly
when gaps get large, while the proposed audio-visual approach is able to
plausibly restore missing information. In addition, we show that multi-task
learning is effective, although the largest contribution to performance comes
from vision.
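
To make the multi-task setup concrete, here is a schematic model with an audio and a video encoder, a fusion layer, and two heads: one reconstructing the masked spectrogram and one predicting phones. All module shapes, names, and the loss weighting are illustrative assumptions, not the authors' architecture; the visual features are assumed to be time-aligned with the audio frames.

import torch
import torch.nn as nn

class AVInpainter(nn.Module):
    def __init__(self, n_mels=80, vis_dim=136, hidden=256, n_phones=40):
        super().__init__()
        self.audio_enc = nn.LSTM(n_mels, hidden, batch_first=True)
        self.video_enc = nn.LSTM(vis_dim, hidden, batch_first=True)
        self.fuse = nn.Linear(2 * hidden, hidden)
        self.inpaint_head = nn.Linear(hidden, n_mels)   # reconstructs masked frames
        self.phone_head = nn.Linear(hidden, n_phones)   # auxiliary recognition task

    def forward(self, masked_mel, visual_feats):
        a, _ = self.audio_enc(masked_mel)               # (B, T, hidden)
        v, _ = self.video_enc(visual_feats)             # (B, T, hidden)
        h = torch.tanh(self.fuse(torch.cat([a, v], dim=-1)))
        return self.inpaint_head(h), self.phone_head(h)

# Multi-task objective, schematically, with an assumed weight alpha:
#   mel_hat, phone_logits = model(masked_mel, lip_feats)
#   loss = recon_loss(mel_hat, clean_mel, mask) + alpha * phone_loss(phone_logits, phones)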
Diffusion-Based Audio Inpainting
Audio inpainting aims to reconstruct missing segments in corrupted
recordings. Previous methods produce plausible reconstructions when the gap
length is shorter than about 100 ms, but the quality decreases for longer
gaps. This paper explores recent advancements in deep learning and,
particularly, diffusion models, for the task of audio inpainting. The proposed
method uses an unconditionally trained generative model, which can be
conditioned in a zero-shot fashion for audio inpainting, offering high
flexibility to regenerate gaps of arbitrary length. An improved deep neural
network architecture based on the constant-Q transform, which allows the model
to exploit pitch-equivariant symmetries in audio, is also presented. The
performance of the proposed algorithm is evaluated through objective and
subjective metrics for the task of reconstructing short to mid-sized gaps. The
results of a formal listening test show that the proposed method delivers
performance comparable to the state of the art for short gaps, while retaining
good audio quality and outperforming the baselines for the longest gap
lengths tested, 150 ms and 200 ms. This work helps improve the restoration of
sound recordings having fairly long local disturbances or dropouts, which must
be reconstructed.
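
The zero-shot conditioning can be pictured as a standard reverse-diffusion loop in which, after each step, the known samples are re-imposed at the current noise level. The sketch below assumes a sigma-parameterized denoiser model(x, sigma) returning an estimate of the clean signal and an Euler-style sampler; the paper's CQT-based network and exact sampler are not reproduced here.

import torch

def diffusion_inpaint(model, y, mask, sigmas):
    """y: observed signal (gap values arbitrary); mask: 1 where y is known;
    sigmas: decreasing noise levels of the schedule (assumed parameterisation)."""
    x = torch.randn_like(y) * sigmas[0]                # start from pure noise
    for i, sigma in enumerate(sigmas):
        # Zero-shot conditioning: re-impose the known region, noised to the
        # current level, so the unconditional model sees a consistent input.
        x = mask * (y + sigma * torch.randn_like(y)) + (1 - mask) * x
        denoised = model(x, sigma)                     # estimate of the clean signal
        sigma_next = sigmas[i + 1] if i + 1 < len(sigmas) else 0.0
        x = denoised + (x - denoised) * (sigma_next / sigma)   # Euler step
    return torch.where(mask.bool(), y, x)              # keep observed samples exactly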