Acoustic Modeling for Automatic Lyrics-to-Audio Alignment
Automatic alignment of lyrics to polyphonic audio is a challenging task, not only because the vocals are corrupted by background music, but also because annotated polyphonic corpora for effective acoustic modeling are scarce. In this work,
we propose (1) using additional speech and music-informed features and (2)
adapting the acoustic models trained on a large amount of solo singing vocals
towards polyphonic music using a small amount of in-domain data. Incorporating
additional information such as voicing and auditory features together with
conventional acoustic features aims to bring robustness against the increased
spectro-temporal variations in singing vocals. By adapting the acoustic model
using a small amount of polyphonic audio data, we reduce the domain mismatch
between training and testing data. We perform several alignment experiments and present an in-depth analysis of alignment errors across acoustic features and model adaptation techniques. The results demonstrate that the proposed strategy
provides a significant error reduction of word boundary alignment over
comparable existing systems, especially on more challenging polyphonic data
with long-duration musical interludes.
Comment: Accepted for publication at Interspeech 2019
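As a rough illustration of the feature augmentation idea in this abstract, the sketch below appends a per-frame voicing probability to conventional MFCC features. The choice of librosa, the pyin-based voicing estimate, and all dimensions are illustrative assumptions, not the paper's actual feature set or extraction pipeline.

```python
# Hedged sketch: combining conventional acoustic features (MFCCs) with a
# voicing-related feature, in the spirit of the feature augmentation the
# abstract describes. Parameters and feature choices are assumptions.
import numpy as np
import librosa

def extract_augmented_features(wav_path, sr=16000, hop_length=160):
    y, sr = librosa.load(wav_path, sr=sr)
    # Conventional acoustic features: 13 MFCCs per frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop_length)
    # Voicing-related feature: per-frame voiced probability from pyin.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"),
        sr=sr, hop_length=hop_length)
    # Align frame counts (the two analyses can differ by a frame or two).
    n = min(mfcc.shape[1], voiced_prob.shape[0])
    # Stack the voicing probability onto the MFCCs: a (14, n) feature matrix.
    return np.vstack([mfcc[:, :n], voiced_prob[np.newaxis, :n]])
```

The augmented features would then feed the acoustic model in place of the plain MFCCs, with the extra voicing dimension intended to help the model cope with background music.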
DeepSinger: Singing Voice Synthesis with Data Mined From the Web
In this paper, we develop DeepSinger, a multi-lingual multi-singer singing
voice synthesis (SVS) system, which is built from scratch using singing
training data mined from music websites. The pipeline of DeepSinger consists of
several steps, including data crawling, singing and accompaniment separation,
lyrics-to-singing alignment, data filtration, and singing modeling.
Specifically, we design a lyrics-to-singing alignment model that automatically extracts the duration of each phoneme in the lyrics, progressing from coarse-grained sentence-level alignment to fine-grained phoneme-level alignment, and further design a multi-lingual, multi-singer singing model based on a feed-forward Transformer that directly generates linear spectrograms from lyrics, from which voices are synthesized using Griffin-Lim. DeepSinger has several advantages over previous SVS systems: 1) to
the best of our knowledge, it is the first SVS system that directly mines
training data from music websites, 2) its lyrics-to-singing alignment model avoids any human effort for alignment labeling and greatly reduces labeling cost, 3) its feed-forward Transformer-based singing model is simple and efficient, removing the complicated acoustic feature modeling of parametric synthesis and leveraging a reference encoder to capture a singer's timbre from noisy singing data, and 4) it can synthesize singing voices for multiple singers in multiple languages. We evaluate DeepSinger on our mined
singing dataset, which consists of about 92 hours of data from 89 singers in three languages (Chinese, Cantonese, and English). The results demonstrate that with
the singing data purely mined from the Web, DeepSinger can synthesize
high-quality singing voices in terms of both pitch accuracy and voice
naturalness (footnote: Our audio samples are available at https://speechresearch.github.io/deepsinger/.)
Comment: Accepted by KDD 2020 research track
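To make the final synthesis step concrete, here is a minimal sketch of Griffin-Lim waveform reconstruction from a linear magnitude spectrogram, as the abstract describes. The STFT parameters and the placeholder spectrogram are assumptions for illustration; the abstract does not specify DeepSinger's actual configuration.

```python
# Hedged sketch: converting a predicted linear magnitude spectrogram to a
# waveform with Griffin-Lim. All parameters below are illustrative
# assumptions, not DeepSinger's published settings.
import numpy as np
import librosa
import soundfile as sf

n_fft, hop_length, sr = 1024, 256, 22050

# Placeholder standing in for a model-predicted linear magnitude
# spectrogram of shape (1 + n_fft // 2, n_frames): here, the STFT
# magnitudes of a 1-second 440 Hz test tone.
predicted_mag = np.abs(librosa.stft(
    librosa.tone(440, sr=sr, duration=1.0),
    n_fft=n_fft, hop_length=hop_length))

# Griffin-Lim iteratively estimates a phase consistent with the magnitudes.
waveform = librosa.griffinlim(predicted_mag, n_iter=60,
                              hop_length=hop_length, n_fft=n_fft)
sf.write("synthesized.wav", waveform, sr)
```

Griffin-Lim is attractive in such a pipeline because it requires no trained vocoder: the waveform is recovered purely by iterative phase estimation from the predicted magnitudes.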