Sample Mixed-Based Data Augmentation for Domestic Audio Tagging
Audio tagging has attracted increasing attention over the last decade and has
various potential applications in many fields. The objective of audio tagging
is to predict the labels of an audio clip. Recently, deep learning methods have
been applied to audio tagging and have achieved state-of-the-art performance.
However, due to the limited size of audio tagging datasets such as the DCASE
data, the trained models tend to overfit, which leads to poor generalization
on new data. Previous data augmentation methods
such as pitch shifting, time stretching and adding background noise do not show
much improvement in audio tagging. In this paper, we explore sample-mixed
data augmentation for the domestic audio tagging task, including mixup,
SamplePairing and extrapolation. We apply a convolutional recurrent neural
network (CRNN) with an attention module, taking the log-scaled mel spectrogram
as input, as the baseline system. In our experiments, we achieve a
state-of-the-art equal error rate (EER) of 0.10 on the DCASE 2016 Task 4
dataset with the mixup approach, outperforming the baseline system without
data augmentation.

Comment: submitted to the Workshop on Detection and Classification of Acoustic
Scenes and Events 2018 (DCASE 2018), 19-20 November 2018, Surrey, UK
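Of the three sample-mixing schemes named in the abstract, mixup is the one credited with the 0.10 EER. The sketch below illustrates the general mixup idea in plain NumPy, mixing pairs of examples and their multi-hot tag labels with a Beta-distributed weight; the function name and the default alpha value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mix a batch of audio features and their tag labels (mixup sketch).

    x: array of shape (batch, ...) holding inputs, e.g. log-mel spectrograms
    y: array of shape (batch, n_tags) holding multi-hot tag labels
    alpha: Beta-distribution parameter controlling mixing strength
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))              # random pairing of samples
    x_mixed = lam * x + (1.0 - lam) * x[perm]   # convex combination of inputs
    y_mixed = lam * y + (1.0 - lam) * y[perm]   # same combination of labels
    return x_mixed, y_mixed
```

Broadly speaking, SamplePairing and extrapolation differ from this mainly in how the mixing weight is chosen and whether the labels are mixed at all, which is why they can be evaluated with essentially the same training pipeline.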
Environmental Sound Classification with Parallel Temporal-spectral Attention
Convolutional neural networks (CNNs) are among the best-performing neural
network architectures for environmental sound classification (ESC). Recently,
temporal attention mechanisms have been used in CNNs to capture the useful
information from the relevant time frames for audio classification, especially
for weakly labelled data where the onset and offset times of the sound events
are not annotated. In these methods, however, the inherent spectral
characteristics and variations are not explicitly exploited when obtaining the
deep features. In this paper, we propose a novel parallel temporal-spectral
attention mechanism for CNNs to learn discriminative sound representations,
which enhances the temporal and spectral features by capturing the importance
of different time frames and frequency bands. Parallel branches are constructed
so that temporal attention and spectral attention can be applied independently,
mitigating interference from segments in which no sound events are present.
Experiments on three ESC datasets and two acoustic scene classification (ASC)
datasets show that our method improves the classification performance and also
exhibits robustness to noise.

Comment: submitted to INTERSPEECH 2020
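The abstract does not give implementation details, so the following PyTorch sketch is only one plausible way to wire parallel temporal and spectral attention branches over a CNN feature map; the class name, the 1x1-convolution gating, the sigmoid weighting and the additive merge are all assumptions rather than the authors' design.

```python
import torch
import torch.nn as nn

class ParallelTemporalSpectralAttention(nn.Module):
    """Illustrative parallel attention over time frames and frequency bands.

    Expects a CNN feature map of shape (batch, channels, freq, time), e.g.
    produced from a log-mel spectrogram by earlier convolutional blocks.
    """
    def __init__(self, channels):
        super().__init__()
        # 1x1 convolutions produce attention logits for each branch
        self.temporal = nn.Conv2d(channels, 1, kernel_size=1)
        self.spectral = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):                                     # x: (B, C, F, T)
        # Temporal branch: one weight per time frame
        t_weights = torch.sigmoid(
            self.temporal(x).mean(dim=2, keepdim=True))       # (B, 1, 1, T)
        # Spectral branch: one weight per frequency band
        f_weights = torch.sigmoid(
            self.spectral(x).mean(dim=3, keepdim=True))       # (B, 1, F, 1)
        # Enhance features in the two branches, then merge them
        return x * t_weights + x * f_weights
```

Under this reading, a feature map such as (16, 64, 128, 400) would be re-weighted along its 400 time frames and 128 frequency bands in parallel before being passed on to the classifier, so frames and bands dominated by background can be attenuated independently.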
- …