In conventional studies on environmental sound separation and synthesis using
captions, datasets consisting of multi-source sounds with their captions have
been used for model training. However, when collecting captions for a
multi-source sound, it is not easy to obtain detailed captions for each sound
source, such as the number of sound occurrences and the timbre. It is therefore
difficult to extract only a single-source target sound with models trained on
conventional captioned sound datasets. In this work, we constructed CAPTDURE, a
dataset of single-source sounds with captions, which can be used for various
tasks such as environmental sound separation and synthesis. Our dataset
consists of 1,044 sounds and 4,902 captions. We evaluated the performance of
environmental sound extraction using our dataset. The experimental results show
that captions for single-source sounds are effective in extracting only the
single-source target sound from a mixture.

Comment: Accepted to INTERSPEECH202