Sample-level CNN Architectures for Music Auto-tagging Using Raw Waveforms
Recent work has shown that the end-to-end approach using convolutional neural
network (CNN) is effective in various types of machine learning tasks. For
audio signals, the approach takes raw waveforms as input using a 1-D
convolution layer. In this paper, we improve the 1-D CNN architecture for music
auto-tagging by adopting building blocks from state-of-the-art image
classification models, ResNets and SENets, and adding multi-level feature
aggregation to it. We compare different combinations of the modules in building
CNN architectures. The results show that they achieve significant improvements
over previous state-of-the-art models on the MagnaTagATune dataset and
comparable results on the Million Song Dataset. Furthermore, we analyze and
visualize our model to show how the 1-D CNN operates.
Comment: Accepted for publication at ICASSP 201
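As a rough illustration of the sample-level approach described in this abstract, the following numpy sketch implements one building block of such a network: a 1-D convolution applied directly to the raw waveform, followed by ReLU and max-pooling. The filter count, kernel size, and 59049-sample input length are illustrative assumptions, not the paper's exact configuration, and the ResNet/SENet blocks and multi-level aggregation are omitted.

```python
import numpy as np

def conv1d(x, kernels, stride):
    """Valid-mode strided 1-D convolution.

    x: (in_ch, T) input; kernels: (out_ch, in_ch, k) filters.
    Returns an (out_ch, T_out) feature map, T_out = (T - k)//stride + 1.
    """
    out_ch, in_ch, k = kernels.shape
    T_out = (x.shape[1] - k) // stride + 1
    out = np.zeros((out_ch, T_out))
    for t in range(T_out):
        patch = x[:, t * stride : t * stride + k]               # (in_ch, k)
        out[:, t] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
    return out

def sample_level_block(x, kernels):
    """One building block: conv (k=3, stride 1) -> ReLU -> max-pool by 3."""
    h = np.maximum(conv1d(x, kernels, stride=1), 0.0)           # ReLU
    T3 = (h.shape[1] // 3) * 3                                   # trim to a multiple of 3
    return h[:, :T3].reshape(h.shape[0], -1, 3).max(axis=2)      # non-overlapping max-pool

rng = np.random.default_rng(0)
wave = rng.standard_normal((1, 59049))       # mono raw waveform (length assumed)
filters = rng.standard_normal((16, 1, 3))    # 16 sample-level filters of width 3
feat = sample_level_block(wave, filters)
print(feat.shape)                            # (16, 19682)
```

Stacking several such blocks shrinks the time axis by a factor of 3 each time, which is how these models cover multi-second excerpts with very small kernels.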
Listening to the World Improves Speech Command Recognition
We study transfer learning in convolutional network architectures applied to
the task of recognizing audio, such as environmental sound events and speech
commands. Our key finding is that not only is it possible to transfer
representations from an unrelated task like environmental sound classification
to a voice-focused task like speech command recognition, but also that doing so
improves accuracies significantly. We also investigate the effect of increased
model capacity for transfer learning in audio, by first validating known results
from the field of Computer Vision of achieving better accuracies with
increasingly deeper networks on two audio datasets: UrbanSound8k and the newly
released Google Speech Commands dataset. Then we propose a simple multiscale
input representation using dilated convolutions and show that it is able to
aggregate larger contexts and increase classification performance. Further, the
models trained using a combination of transfer learning and multiscale input
representations need only 40% of the training data to achieve similar
accuracies as a freshly trained model with 100% of the training data. Finally,
we demonstrate a positive interaction effect for the multiscale input and
transfer learning, making a case for the joint application of the two
techniques.
Comment: 8 page
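To make the dilated-convolution idea in this abstract concrete, here is a minimal numpy sketch showing how dilation enlarges the context a filter sees without adding parameters. The kernel size, dilation rates, and input length are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Valid 1-D convolution of a single channel with a dilated kernel.

    Effective kernel span = (len(w) - 1) * dilation + 1 samples.
    """
    k = len(w)
    span = (k - 1) * dilation + 1
    T_out = len(x) - span + 1
    return np.array([
        sum(w[j] * x[t + j * dilation] for j in range(k))
        for t in range(T_out)
    ])

def receptive_field(kernel, dilations):
    """Receptive field of stacked stride-1 dilated convs: 1 + sum((k-1)*d)."""
    return 1 + sum((kernel - 1) * d for d in dilations)

x = np.arange(16, dtype=float)
w = np.ones(3)
y = dilated_conv1d(x, w, dilation=2)          # each output spans 5 input samples
print(len(y))                                  # 12
print(receptive_field(3, [1, 2, 4, 8]))        # 31 samples after four layers
```

Doubling the dilation at each layer grows the receptive field exponentially, which is why a few dilated layers can aggregate the larger contexts the abstract refers to.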
Deep Learning for Audio Signal Processing
Given the recent surge in developments of deep learning, this article
provides a review of the state-of-the-art deep learning techniques for audio
signal processing. Speech, music, and environmental sound processing are
considered side-by-side, in order to point out similarities and differences
between the domains, highlighting general methods, problems, key references,
and potential for cross-fertilization between areas. The dominant feature
representations (in particular, log-mel spectra and raw waveform) and deep
learning models are reviewed, including convolutional neural networks, variants
of the long short-term memory architecture, as well as more audio-specific
neural network models. Subsequently, prominent deep learning application areas
are covered, i.e. audio recognition (automatic speech recognition, music
information retrieval, environmental sound detection, localization and
tracking) and synthesis and transformation (source separation, audio
enhancement, generative models for speech, sound, and music synthesis).
Finally, key issues and future questions regarding deep learning applied to
audio signal processing are identified.
Comment: 15 pages, 2 pdf figure
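Since the review names log-mel spectra as a dominant feature representation, the following self-contained numpy sketch computes one: framing with a Hann window, a power spectrum via the FFT, a triangular mel filterbank, and a log. The sample rate, FFT size, hop, and mel-band count are common but assumed values, not ones prescribed by the article.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Frame -> Hann window -> power spectrum -> mel filterbank -> log."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2    # (n_frames, n_fft//2 + 1)

    # Triangular filters centered at points equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[m - 1, k] = (right - k) / max(right - center, 1)

    return np.log(power @ fb.T + 1e-10)                 # (n_frames, n_mels)

sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s, 440 Hz tone
S = log_mel_spectrogram(sig)
print(S.shape)                                            # (61, 40)
```

In practice libraries such as librosa provide tuned versions of this pipeline; the point of the sketch is only to show what the "log-mel spectra" input to the reviewed models consists of.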