Sample-level CNN Architectures for Music Auto-tagging Using Raw Waveforms
Recent work has shown that the end-to-end approach using convolutional neural
networks (CNNs) is effective across a range of machine learning tasks. For
audio signals, the approach takes raw waveforms as input via a 1-D
convolution layer. In this paper, we improve the 1-D CNN architecture for music
auto-tagging by adopting building blocks from state-of-the-art image
classification models, ResNets and SENets, and adding multi-level feature
aggregation to it. We compare different combinations of the modules in building
CNN architectures. The results show that they achieve significant improvements
over previous state-of-the-art models on the MagnaTagATune dataset and
comparable results on the Million Song Dataset. Furthermore, we analyze and
visualize our model to show how the 1-D CNN operates.
Comment: Accepted for publication at ICASSP 201
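As a minimal sketch of the building blocks the abstract names, the following NumPy code shows a sample-level 1-D convolution over a raw waveform followed by a squeeze-and-excitation channel gate (the SENet module). All shapes, names, and values are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def conv1d(x, w, stride=1):
    """Valid 1-D convolution. x: (c_in, time), w: (c_out, c_in, k)."""
    c_out, c_in, k = w.shape
    t_out = (x.shape[1] - k) // stride + 1
    out = np.zeros((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            out[o, t] = np.sum(w[o] * x[:, t * stride : t * stride + k])
    return out

def se_gate(x, w1, w2):
    """Squeeze-and-excitation: pool over time, bottleneck MLP, sigmoid rescale."""
    z = x.mean(axis=1)                                         # squeeze: (channels,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))  # excitation gate
    return x * s[:, None]                                      # channel-wise rescale

rng = np.random.default_rng(0)
wave = rng.standard_normal((1, 96))                 # mono raw waveform, 96 samples
feat = np.maximum(conv1d(wave, rng.standard_normal((4, 1, 3)), stride=3), 0.0)
gated = se_gate(feat, rng.standard_normal((2, 4)), rng.standard_normal((4, 2)))
```

The gate values lie in (0, 1), so each channel of the feature map is attenuated according to its global (time-pooled) statistics, which is the recalibration idea the abstract borrows from SENets.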
Listening to the World Improves Speech Command Recognition
We study transfer learning in convolutional network architectures applied to
the task of recognizing audio, such as environmental sound events and speech
commands. Our key finding is that not only is it possible to transfer
representations from an unrelated task like environmental sound classification
to a voice-focused task like speech command recognition, but also that doing so
improves accuracies significantly. We also investigate the effect of increased
model capacity for transfer learning on audio, first validating the known
computer-vision result that increasingly deeper networks achieve better
accuracies on two audio datasets: UrbanSound8k and the newly
released Google Speech Commands dataset. Then we propose a simple multiscale
input representation using dilated convolutions and show that it is able to
aggregate larger contexts and increase classification performance. Further, the
models trained with a combination of transfer learning and multiscale input
representations need only 40% of the training data to reach accuracies similar
to those of a model trained from scratch on 100% of the data. Finally,
we demonstrate a positive interaction effect for the multiscale input and
transfer learning, making a case for the joint application of the two
techniques.
Comment: 8 page
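The multiscale input idea can be sketched as parallel dilated 1-D convolutions whose outputs are stacked as channels; the code below is a hypothetical NumPy illustration, not the paper's implementation. A dilated kernel of size k with dilation d spans (k-1)*d + 1 samples, so larger dilations aggregate wider temporal context at the same parameter count:

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """Valid dilated 1-D convolution on a mono signal. x: (time,), w: (k,)."""
    k = len(w)
    span = (k - 1) * dilation + 1          # receptive field in samples
    return np.array([np.dot(w, x[t : t + span : dilation])
                     for t in range(len(x) - span + 1)])

x = np.arange(8.0)
w = np.ones(3)
y1 = dilated_conv1d(x, w, dilation=1)      # each output sees 3 samples
y2 = dilated_conv1d(x, w, dilation=2)      # each output sees a 5-sample span

# multiscale: crop to a common length and stack as channels
n = min(len(y1), len(y2))
multi = np.stack([y1[:n], y2[:n]])
```

With a box kernel the effect is easy to read off: the dilation-1 branch sums adjacent triples, while the dilation-2 branch sums every other sample across a wider window, giving the downstream classifier both fine and coarse views of the signal.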