Multi-scale Multi-band DenseNets for Audio Source Separation
This paper deals with the problem of audio source separation. To handle its
complex and ill-posed nature, current state-of-the-art approaches employ deep
neural networks to obtain instrumental spectra from a mixture. In this study,
we propose a novel network
architecture that extends the recently developed densely connected
convolutional network (DenseNet), which has shown excellent results on image
classification tasks. To deal with the specific problem of audio source
separation, an up-sampling layer, block skip connections and band-dedicated
dense blocks are incorporated on top of DenseNet. The proposed approach takes
advantage of long contextual information and outperforms state-of-the-art
results from the SiSEC 2016 competition by a large margin in terms of
signal-to-distortion ratio. Moreover, the proposed architecture requires
significantly fewer parameters and considerably less training time compared
with other methods.
Comment: to appear at WASPAA 2017
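The band-dedicated design can be pictured with a short sketch. The code below is a minimal PyTorch illustration of the multi-band idea only, not the authors' MMDenseNet: the dense blocks, band split point, growth rate and layer counts are assumptions chosen for brevity, and the up-sampling/down-sampling path of the full architecture is omitted.

```python
# Minimal sketch (not the authors' code): band-dedicated dense blocks plus a
# full-band path, followed by a 1x1 convolution that predicts a mask.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth, layers):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, 3, padding=1) for i in range(layers)
        )

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)          # densely connected features

class MultiBandSeparator(nn.Module):
    """Split the spectrogram into low/high bands, give each band its own dense
    block, process the full band as well, then merge and predict a mask."""
    def __init__(self, bins=1024, growth=16, layers=4):
        super().__init__()
        self.split = bins // 2
        self.low = DenseBlock(1, growth, layers)
        self.high = DenseBlock(1, growth, layers)
        self.full = DenseBlock(1, growth, layers)
        out_ch = 2 * (1 + growth * layers)
        self.mask = nn.Conv2d(out_ch, 1, 1)

    def forward(self, spec):                    # spec: (batch, 1, bins, frames)
        low = self.low(spec[:, :, :self.split])
        high = self.high(spec[:, :, self.split:])
        banded = torch.cat([low, high], dim=2)  # re-stack along frequency
        merged = torch.cat([banded, self.full(spec)], dim=1)
        return spec * torch.sigmoid(self.mask(merged))
```

In this sketch a magnitude spectrogram of shape (batch, 1, 1024, frames) passes through the model and comes out masked with the same shape; the actual architecture combines the banded and full-band paths with further dense blocks and skip connections rather than a single 1x1 convolution.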
PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network
Music creation typically consists of two parts: composing the musical
score, and then performing the score with instruments to make sounds. While
recent work has made much progress in automatic music generation in the
symbolic domain, few attempts have been made to build an AI model that can
render realistic music audio from musical scores. Directly synthesizing audio
with sound sample libraries often leads to mechanical and deadpan results,
since musical scores do not contain performance-level information, such as
subtle changes in timing and dynamics. Moreover, while the task may sound like
a text-to-speech synthesis problem, there are fundamental differences since
music audio has rich polyphonic sounds. To build such an AI performer, we
propose in this paper a deep convolutional model that learns in an end-to-end
manner the score-to-audio mapping between a symbolic representation of music
called the piano roll and an audio representation of music called the
spectrogram. The model consists of two subnets: the ContourNet, which uses a
U-Net structure to learn the correspondence between piano rolls and
spectrograms and to give an initial result; and the TextureNet, which further
uses a multi-band residual network to refine the result by adding the spectral
texture of overtones and timbre. We train the model to generate music clips of
the violin, cello, and flute, with a dataset of moderate size. We also present
the result of a user study that shows our model achieves higher mean opinion
score (MOS) in naturalness and emotional expressivity than a WaveNet-based
model and two commercial sound libraries. We release our source code at
https://github.com/bwang514/PerformanceNet
Comment: 8 pages, 6 figures, AAAI 2019 camera-ready version
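As a rough illustration of the two-subnet design, the sketch below maps a piano roll to a spectrogram with a coarse stage followed by a residual refinement stage. It is an assumption-level sketch rather than the released PerformanceNet code: the actual ContourNet is a U-Net and the actual TextureNet is a multi-band residual network, whereas the 1-D convolutions, hidden sizes and bin count here are illustrative only.

```python
# Minimal sketch of a coarse-then-refine score-to-audio mapping (assumed sizes).
import torch
import torch.nn as nn

class ContourNet(nn.Module):
    """Coarse mapping from a piano roll (128 pitches) to a spectrogram."""
    def __init__(self, n_bins=1024, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(128, hidden, 5, padding=2), nn.ReLU(),
                                 nn.Conv1d(hidden, hidden, 5, padding=2), nn.ReLU())
        self.dec = nn.Conv1d(hidden, n_bins, 5, padding=2)

    def forward(self, roll):                 # roll: (batch, 128, frames)
        return self.dec(self.enc(roll))      # coarse spectrogram: (batch, n_bins, frames)

class TextureNet(nn.Module):
    """Residual refinement that adds spectral texture to the coarse estimate."""
    def __init__(self, n_bins=1024, hidden=256):
        super().__init__()
        self.refine = nn.Sequential(nn.Conv1d(n_bins, hidden, 3, padding=1), nn.ReLU(),
                                    nn.Conv1d(hidden, n_bins, 3, padding=1))

    def forward(self, coarse):
        return coarse + self.refine(coarse)  # residual connection

class PerformanceNetSketch(nn.Module):
    def __init__(self, n_bins=1024):
        super().__init__()
        self.contour = ContourNet(n_bins)
        self.texture = TextureNet(n_bins)

    def forward(self, roll):
        return self.texture(self.contour(roll))
```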
Monaural Singing Voice Separation with Skip-Filtering Connections and Recurrent Inference of Time-Frequency Mask
Singing voice separation based on deep learning relies on the use of
time-frequency masking. In many cases the masking process is not a learnable
function, or it is not encapsulated in the deep learning optimization.
Consequently, most existing methods rely on a post-processing step using
generalized Wiener filtering. This work proposes a method that learns and
optimizes a source-dependent mask during training and does not need the
aforementioned post-processing step. We introduce a recurrent inference
algorithm, a sparse transformation step to improve the mask generation process,
and a learned denoising filter. The obtained results show an increase of 0.49 dB
in signal-to-distortion ratio and 0.30 dB in signal-to-interference ratio
compared to previous state-of-the-art approaches for monaural singing voice
separation.
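A compact way to picture the skip-filtering connection and the recurrent inference of the mask is sketched below. This is not the authors' implementation: the GRU, the single linear mask layer, the fixed iteration count and the "denoise" layer standing in for the learned denoising filter are all assumptions made for the example.

```python
# Minimal sketch: the mask is predicted and applied to the mixture inside the
# network (skip-filtering connection) and refined over a few recurrent iterations.
import torch
import torch.nn as nn

class SkipFilteringSeparator(nn.Module):
    def __init__(self, n_bins=1024, hidden=512, iterations=3):
        super().__init__()
        self.iterations = iterations
        self.rnn = nn.GRU(n_bins, hidden, batch_first=True)
        self.mask = nn.Linear(hidden, n_bins)
        self.denoise = nn.Linear(n_bins, n_bins)   # stand-in for the learned denoising filter

    def forward(self, mix):                  # mix: (batch, frames, n_bins) magnitudes
        estimate = mix
        for _ in range(self.iterations):     # recurrent inference of the mask
            h, _ = self.rnn(estimate)
            m = torch.sigmoid(self.mask(h))
            estimate = m * mix               # skip-filtering: mask applied to the mixture
        return torch.relu(self.denoise(estimate))
```

Because the masking happens inside the forward pass, the separation objective can be computed directly on the masked mixture, which is what makes the post-processing Wiener filter unnecessary in this setup.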
Self-Supervised Music Source Separation Using Vector-Quantized Source Category Estimates
Music source separation is focused on extracting distinct sonic elements from
composite tracks. Historically, many methods have been grounded in supervised
learning, necessitating labeled data, which is occasionally constrained in its
diversity. More recent methods have delved into N-shot techniques that utilize
one or more audio samples to aid in the separation. However, a challenge with
some of these methods is the necessity for an audio query during inference,
making them less suited for genres with varied timbres and effects. This paper
offers a proof-of-concept for a self-supervised music source separation system
that eliminates the need for audio queries at inference time. In the training
phase, while it adopts a query-based approach, we introduce a modification by
substituting the continuous embedding of query audios with Vector Quantized
(VQ) representations. Trained end-to-end with up to N classes as determined by
the VQ's codebook size, the model seeks to effectively categorize instrument
classes. During inference, the input is partitioned into N sources, with some
potentially left unutilized based on the mix's instrument makeup. This
methodology suggests an alternative avenue for considering source separation
across diverse music genres. We provide examples and additional results online.
Comment: 4 pages, 2 figures, 1 table; accepted at the 37th Conference on
Neural Information Processing Systems (2023), Machine Learning for Audio Workshop
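The sketch below illustrates the training-time substitution of the continuous query embedding with a vector-quantized codebook entry, and the query-free inference that conditions on each codebook entry in turn. All names, layer choices and sizes are hypothetical, and the paper's actual architecture and VQ training losses (e.g. codebook and commitment terms) are not reproduced here.

```python
# Minimal sketch of query conditioning through a VQ codebook (assumed design).
import torch
import torch.nn as nn

class VQQuerySeparator(nn.Module):
    def __init__(self, n_bins=1024, n_codes=8, dim=128, hidden=512):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, dim)
        self.query_enc = nn.Sequential(nn.Linear(n_bins, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.separator = nn.GRU(n_bins + dim, hidden, batch_first=True)
        self.mask = nn.Linear(hidden, n_bins)

    def quantize(self, query_spec):                  # query_spec: (batch, frames, n_bins)
        z = self.query_enc(query_spec).mean(dim=1)   # pooled query embedding (batch, dim)
        dist = torch.cdist(z, self.codebook.weight)  # distance to every codebook entry
        return self.codebook(dist.argmin(dim=1))     # nearest codebook vector

    def separate(self, mix, code):                   # mix: (batch, frames, n_bins)
        cond = code.unsqueeze(1).expand(-1, mix.size(1), -1)
        h, _ = self.separator(torch.cat([mix, cond], dim=-1))
        return torch.sigmoid(self.mask(h)) * mix

    def forward(self, mix, query_spec):              # training: query audio available
        return self.separate(mix, self.quantize(query_spec))

    def infer_all(self, mix):                        # inference: no query needed
        return [self.separate(mix, self.codebook.weight[i:i + 1].expand(mix.size(0), -1))
                for i in range(self.codebook.num_embeddings)]
```

At inference time `infer_all` simply conditions on each of the N codebook entries in turn, so sources whose instrument class is absent from the mix come out close to silent.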