Environmental Sound Classification with Parallel Temporal-spectral Attention
Convolutional neural networks (CNNs) are among the best-performing neural
network architectures for environmental sound classification (ESC). Recently,
temporal attention mechanisms have been used in CNNs to capture useful
information from the relevant time frames for audio classification, especially
for weakly labelled data where the onset and offset times of the sound events
are not annotated. These methods, however, do not explicitly exploit the
inherent spectral characteristics and variations when obtaining the
deep features. In this paper, we propose a novel parallel temporal-spectral
attention mechanism for CNN to learn discriminative sound representations,
which enhances the temporal and spectral features by capturing the importance
of different time frames and frequency bands. Parallel branches are constructed
so that temporal attention and spectral attention can be applied independently,
mitigating interference from segments in which no sound events are present.
Experiments on three ESC datasets and two acoustic scene classification (ASC)
datasets show that our method improves classification performance and is also
robust to noise.
Comment: submitted to INTERSPEECH202
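The parallel attention idea described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names are hypothetical, and the pooling choice (mean pooling over the other axes to score each frame or band) and the additive fusion of the two branches are assumptions made for the sketch.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(feat):
    # Score each time frame by pooling over channels and frequency,
    # then reweight frames by their softmax importance.
    scores = feat.mean(axis=(0, 1))           # shape: (time,)
    weights = softmax(scores, axis=0)
    return feat * weights[None, None, :]

def spectral_attention(feat):
    # Score each frequency band by pooling over channels and time.
    scores = feat.mean(axis=(0, 2))           # shape: (freq,)
    weights = softmax(scores, axis=0)
    return feat * weights[None, :, None]

def parallel_temporal_spectral_attention(feat):
    # Parallel branches: each attention is applied independently to the
    # input feature map, then the two enhanced maps are fused (here, summed).
    return temporal_attention(feat) + spectral_attention(feat)

# Toy CNN feature map: (channels, frequency bands, time frames).
feat = np.random.rand(8, 64, 100)
out = parallel_temporal_spectral_attention(feat)
print(out.shape)  # (8, 64, 100)
```

Because the branches act on the same input rather than in sequence, a frame deemed unimportant by the temporal branch does not suppress the spectral branch's view of that frame, which is the motivation the abstract gives for the parallel construction.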
Between-class Learning for Image Classification
In this paper, we propose a novel learning method for image classification
called Between-Class learning (BC learning). We generate between-class images
by mixing two images belonging to different classes with a random ratio. We
then input the mixed image to the model and train the model to output the
mixing ratio. BC learning imposes constraints on the shape of the feature
distributions, which improves generalization. BC learning was originally
developed for sounds, which can be digitally mixed. Mixing two images may not
appear to make sense; however, we argue that because convolutional neural
networks have an aspect of treating input data as waveforms, what works on
sounds should also work on images. First, we
propose a simple mixing method using internal divisions, which surprisingly
proves to significantly improve performance. Second, we propose a mixing method
that treats the images as waveforms, which leads to a further improvement in
performance. As a result, we achieved 19.4% and 2.26% top-1 errors on
ImageNet-1K and CIFAR-10, respectively.
Comment: 11 pages, 8 figures, published as a conference paper at CVPR 201
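The simple internal-division mixing described above can be sketched in a few lines. This is an illustrative sketch, not the paper's code: the function name `bc_mix` is hypothetical, and the soft-label encoding (placing the ratio and its complement on the two class positions) is one straightforward way to make the model "output the mixing ratio".

```python
import numpy as np

def bc_mix(x1, x2, label1, label2, num_classes, rng):
    """Mix two inputs of different classes by internal division with a
    random ratio r, and return a soft target encoding that ratio."""
    r = rng.random()                     # random mixing ratio in [0, 1)
    mixed = r * x1 + (1.0 - r) * x2      # simple internal division
    target = np.zeros(num_classes)
    target[label1] = r                   # soft label carries the ratio
    target[label2] = 1.0 - r
    return mixed, target

rng = np.random.default_rng(0)
x1 = np.ones((3, 32, 32))    # toy "image" of class 0
x2 = np.zeros((3, 32, 32))   # toy "image" of class 1
mixed, target = bc_mix(x1, x2, 0, 1, num_classes=10, rng=rng)
print(mixed.shape, target[0] + target[1])
```

The paper's second, stronger variant treats the images as waveforms (e.g., removing each image's mean before mixing) rather than mixing raw pixel values directly; the sketch above shows only the simple internal-division variant.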