Environmental Sound Classification with Parallel Temporal-spectral Attention
Convolutional neural networks (CNNs) are among the best-performing neural
network architectures for environmental sound classification (ESC). Recently,
temporal attention mechanisms have been used in CNNs to capture useful
information from the relevant time frames for audio classification, especially
for weakly labelled data where the onset and offset times of the sound events
are not provided. In these methods, however, the inherent spectral
characteristics and variations are not explicitly exploited when obtaining the
deep features. In this paper, we propose a novel parallel temporal-spectral
attention mechanism for CNNs to learn discriminative sound representations,
which enhances the temporal and spectral features by capturing the importance
of different time frames and frequency bands. Parallel branches are constructed
to allow temporal attention and spectral attention to be applied respectively
in order to mitigate interference from the segments without the presence of
sound events. The experiments on three ESC datasets and two acoustic scene
classification (ASC) datasets show that our method improves the classification
performance and also exhibits robustness to noise.
Comment: submitted to INTERSPEECH202
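As a rough illustration of the idea, the following PyTorch sketch re-weights a CNN feature map along the time and frequency axes in two parallel attention branches. The module name, the pooling choices, and the residual combination are assumptions made for this sketch, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ParallelTemporalSpectralAttention(nn.Module):
    """Illustrative sketch (not the paper's exact design): re-weight a CNN
    feature map of shape (batch, channels, time, freq) along the time axis and
    the frequency axis in two parallel branches."""

    def __init__(self, channels: int):
        super().__init__()
        # One 1x1 convolution per branch predicts per-step attention weights.
        self.temporal_fc = nn.Conv1d(channels, channels, kernel_size=1)
        self.spectral_fc = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, F)
        t_desc = x.mean(dim=3)  # (B, C, T): pool over frequency
        f_desc = x.mean(dim=2)  # (B, C, F): pool over time
        t_attn = torch.sigmoid(self.temporal_fc(t_desc)).unsqueeze(3)  # (B, C, T, 1)
        f_attn = torch.sigmoid(self.spectral_fc(f_desc)).unsqueeze(2)  # (B, C, 1, F)
        # Enhance the features with both branches and keep a residual path.
        return x + x * t_attn + x * f_attn


if __name__ == "__main__":
    feats = torch.randn(4, 64, 128, 64)  # (batch, channels, time, freq)
    out = ParallelTemporalSpectralAttention(64)(feats)
    print(out.shape)  # torch.Size([4, 64, 128, 64])
```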
SpecAugment++: A Hidden Space Data Augmentation Method for Acoustic Scene Classification
In this paper, we present SpecAugment++, a novel data augmentation method for
deep neural network based acoustic scene classification (ASC). Different from
other popular data augmentation methods such as SpecAugment and mixup that only
work on the input space, SpecAugment++ is applied to both the input space and
the hidden space of the deep neural networks to enhance the input and the
intermediate feature representations. For an intermediate hidden state, the
augmentation techniques consist of masking blocks of frequency channels and
masking blocks of time frames, which improve generalization by enabling a model
to attend not only to the most discriminative parts of the feature, but also to
the feature as a whole. Apart from using zeros for masking, we also examine two
masking approaches that use other samples within the minibatch, which
introduces noise into the networks and helps make them more discriminative for
classification. The experimental results on the DCASE 2018 Task 1 and DCASE
2019 Task 1 datasets show that our proposed method obtains 3.6% and 4.7%
accuracy gains, respectively, over a strong baseline without augmentation
(CP-ResNet), and outperforms previous data augmentation methods.
Comment: Submitted to Interspeech 202
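A minimal sketch of the masking operation on a hidden feature map is shown below, assuming a (batch, channels, time, frequency) layout. The function name, block widths, and the "swap with another minibatch sample" variant are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def hidden_space_mask(h: torch.Tensor, freq_width: int = 8, time_width: int = 20,
                      mode: str = "zero") -> torch.Tensor:
    """Illustrative hidden-space masking in the spirit of SpecAugment++.
    h: intermediate feature map of shape (batch, channels, time, freq).
    mode='zero'  masks blocks with zeros;
    mode='swap'  fills the masked blocks with the same blocks taken from other
                 samples in the minibatch (an assumed variant)."""
    b, c, t, f = h.shape
    out = h.clone()
    # Pick one frequency block and one time block per call (simplified).
    f0 = int(torch.randint(0, max(f - freq_width, 1), (1,)))
    t0 = int(torch.randint(0, max(t - time_width, 1), (1,)))
    if mode == "zero":
        out[:, :, :, f0:f0 + freq_width] = 0.0
        out[:, :, t0:t0 + time_width, :] = 0.0
    else:
        perm = torch.randperm(b)  # borrow blocks from a shuffled copy of the batch
        out[:, :, :, f0:f0 + freq_width] = h[perm][:, :, :, f0:f0 + freq_width]
        out[:, :, t0:t0 + time_width, :] = h[perm][:, :, t0:t0 + time_width, :]
    return out
```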
A Global-local Attention Framework for Weakly Labelled Audio Tagging
Weakly labelled audio tagging aims to predict the classes of sound events
within an audio clip, where the onset and offset times of the sound events are
not provided. Previous works have used the multiple instance learning (MIL)
framework, and exploited the information of the whole audio clip by MIL pooling
functions. However, the detailed information of sound events such as their
durations may not be considered under this framework. To address this issue, we
propose a novel two-stream framework for audio tagging by exploiting the global
and local information of sound events. The global stream analyzes the whole
audio clip in order to identify the local clips that should be attended to,
using a class-wise selection module. These clips are then fed to the local
stream to exploit the detailed information for a better decision. Experimental
results on AudioSet show that our proposed method can significantly improve
the performance of audio tagging under different baseline network
architectures.
Comment: Accepted to ICASSP202
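One way to picture the class-wise selection step is sketched below: segments are ranked by how strongly they support the classes the global stream predicts, and the top-k segments are passed on to the local stream. The tensor shapes, the sigmoid weighting, and the value of k are assumptions made for this sketch.

```python
import torch

def select_local_clips(clip_logits: torch.Tensor, segment_scores: torch.Tensor,
                       segments: torch.Tensor, k: int = 2) -> torch.Tensor:
    """Illustrative class-wise selection (shapes and weighting are assumptions).
    clip_logits:    (B, C)      clip-level predictions from the global stream.
    segment_scores: (B, S, C)   per-segment class scores from the global stream.
    segments:       (B, S, ...) raw segments to forward to the local stream.
    Returns the k selected segments per clip, shape (B, k, ...)."""
    # Weight per-segment scores by the globally predicted class probabilities,
    # then rank segments by their best weighted class score.
    class_probs = torch.sigmoid(clip_logits).unsqueeze(1)    # (B, 1, C)
    relevance = (segment_scores * class_probs).amax(dim=2)   # (B, S)
    top_idx = relevance.topk(k, dim=1).indices               # (B, k)
    batch_idx = torch.arange(segments.size(0)).unsqueeze(1)  # (B, 1)
    return segments[batch_idx, top_idx]                      # (B, k, ...)
```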
A Two-student Learning Framework for Mixed Supervised Target Sound Detection
Target sound detection (TSD) aims to detect the target sound from mixture
audio given the reference information. Previous work shows that a good
detection performance relies on fully-annotated data. However, collecting
fully-annotated data is labor-intensive. Therefore, we consider TSD with mixed
supervision, which learns novel categories (target domain) using weak
annotations with the help of full annotations of existing base categories
(source domain). We propose a novel two-student learning framework, which
contains two mutually helping student models that learn from the fully- and
weakly-annotated datasets, respectively. Specifically, we first propose a
frame-level knowledge distillation strategy to transfer class-agnostic
knowledge from one student to the other. After that, a pseudo supervised (PS)
training scheme is designed to further transfer knowledge between the two
students. Lastly, an adversarial training strategy is proposed,
which aims to align the data distribution between source and target domains. To
evaluate our method, we build three TSD datasets based on UrbanSound and
AudioSet. Experimental results show that our method offers about an 8%
improvement in the event-based F-score.
Comment: submitted to Interspeech202
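A minimal sketch of what a frame-level distillation loss between the two students could look like, treating one student's frame posteriors as soft targets for the other; the sigmoid/BCE formulation and the temperature are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def frame_level_distillation_loss(student_frame_logits: torch.Tensor,
                                  teacher_frame_logits: torch.Tensor,
                                  temperature: float = 2.0) -> torch.Tensor:
    """Illustrative frame-level KD loss: one student's per-frame posteriors are
    used as soft targets for the other. Inputs: (batch, frames, classes)."""
    with torch.no_grad():
        soft_targets = torch.sigmoid(teacher_frame_logits / temperature)
    student_probs = torch.sigmoid(student_frame_logits / temperature)
    return F.binary_cross_entropy(student_probs, soft_targets)
```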
NoreSpeech: Knowledge Distillation based Conditional Diffusion Model for Noise-robust Expressive TTS
Expressive text-to-speech (TTS) can synthesize a new speaking style by
imitating the prosody and timbre of a reference audio, but this task faces the
following challenges: (1) the highly dynamic prosody information in the
reference audio is difficult to extract, especially when the reference audio
contains background noise; and (2) the TTS system should generalize well to
unseen speaking styles. In this paper, we present a noise-robust expressive
TTS model (NoreSpeech),
which can robustly transfer speaking style in a noisy reference utterance to
synthesized speech. Specifically, our NoreSpeech includes several components:
(1) a novel DiffStyle module, which leverages powerful probabilistic denoising
diffusion models to learn noise-agnostic speaking style features from a teacher
model by knowledge distillation; (2) a VQ-VAE block, which maps the style
features into a controllable quantized latent space for improving the
generalization of style transfer; and (3) a straightforward but effective
parameter-free text-style alignment module, which enables NoreSpeech to
transfer style to a textual input from a length-mismatched reference utterance.
Experiments demonstrate that NoreSpeech is more effective than previous
expressive TTS models in noisy environments. Audio samples and code are
available at http://dongchaoyang.top/NoreSpeech_demo/
Comment: Submitted to ICASSP202
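As a rough sketch of how a parameter-free alignment between a text sequence and a length-mismatched style sequence might work, plain scaled dot-product attention with no learnable weights can be used; the shapes and the absence of any projection layers here are assumptions for illustration.

```python
import torch

def parameter_free_text_style_align(text_hidden: torch.Tensor,
                                    style_tokens: torch.Tensor) -> torch.Tensor:
    """Illustrative parameter-free alignment: every text position attends over
    the (length-mismatched) style sequence with scaled dot-product attention,
    so no extra trainable parameters are introduced.
    text_hidden:  (B, T_text, D)
    style_tokens: (B, T_style, D)
    Returns style context aligned to the text, shape (B, T_text, D)."""
    d = text_hidden.size(-1)
    scores = torch.matmul(text_hidden, style_tokens.transpose(1, 2)) / d ** 0.5
    weights = torch.softmax(scores, dim=-1)    # (B, T_text, T_style)
    return torch.matmul(weights, style_tokens)
```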