Environmental Sound Classification with Parallel Temporal-spectral Attention
Convolutional neural networks (CNNs) are among the best-performing neural
network architectures for environmental sound classification (ESC). Recently,
temporal attention mechanisms have been used in CNNs to capture useful
information from the relevant time frames for audio classification, especially
for weakly labelled data where the onset and offset times of the sound events
are not annotated. In these methods, however, the inherent spectral
characteristics and variations are not explicitly exploited when obtaining the
deep features. In this paper, we propose a novel parallel temporal-spectral
attention mechanism for CNN to learn discriminative sound representations,
which enhances the temporal and spectral features by capturing the importance
of different time frames and frequency bands. Parallel branches are constructed
to allow temporal attention and spectral attention to be applied separately,
mitigating interference from segments in which no sound events are present.
Experiments on three ESC datasets and two acoustic scene classification (ASC)
datasets show that our
method improves the classification performance and also exhibits robustness to
noise.
Comment: submitted to INTERSPEECH202
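The parallel attention idea above can be sketched in a few lines. This is a minimal numpy illustration, not the paper's exact architecture: the attention weights here are derived from simple mean energies rather than learned convolutional branches, and the additive fusion is an assumption for clarity.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def parallel_temporal_spectral_attention(spec):
    """Illustrative sketch: weight time frames and frequency bands in two
    parallel branches, then fuse. `spec` has shape (n_freq, n_time).
    In the paper the weights are learned; here they come from mean energy."""
    temporal_w = softmax(spec.mean(axis=0))       # (n_time,)  per-frame importance
    spectral_w = softmax(spec.mean(axis=1))       # (n_freq,)  per-band importance
    temporal_branch = spec * temporal_w[None, :]  # emphasize informative frames
    spectral_branch = spec * spectral_w[:, None]  # emphasize informative bands
    return temporal_branch + spectral_branch      # simple additive fusion (assumption)

spec = np.random.rand(64, 100)  # toy log-mel spectrogram: 64 bands x 100 frames
out = parallel_temporal_spectral_attention(spec)
print(out.shape)
```

Because the two branches are computed independently, frames dominated by silence receive low temporal weight without suppressing informative frequency bands, and vice versa.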
Learning General Audio Representations with Large-Scale Training of Patchout Audio Transformers
The success of supervised deep learning methods is largely due to their
ability to learn relevant features from raw data. Deep Neural Networks (DNNs)
trained on large-scale datasets are capable of capturing a diverse set of
features, and learning a representation that can generalize onto unseen tasks
and datasets that are from the same domain. Hence, these models can be used as
powerful feature extractors, in combination with shallower models as
classifiers, for smaller tasks and datasets where the amount of training data
is insufficient for learning an end-to-end model from scratch. During the past
years, Convolutional Neural Networks (CNNs) have largely been the method of
choice for audio processing. However, recently attention-based transformer
models have demonstrated great potential in supervised settings, outperforming
CNNs. In this work, we investigate the use of audio transformers trained on
large-scale datasets to learn general-purpose representations. We study how the
different setups in these audio transformers affect the quality of their
embeddings. We experiment with the models' time resolution, extracted embedding
level, and receptive fields in order to see how they affect performance on a
variety of tasks and datasets, following the HEAR 2021 NeurIPS challenge
evaluation setup. Our results show that representations extracted by audio
transformers outperform CNN representations. Furthermore, we show that
transformers trained on AudioSet can be extremely effective representation
extractors for a wide range of downstream tasks.
Comment: will appear in HEAR: Holistic Evaluation of Audio Representations,
Proceedings of Machine Learning Research PMLR 166. Source code:
https://github.com/kkoutini/passt_hear2
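The "frozen extractor plus shallow classifier" setup described above can be sketched as follows. This is a toy numpy illustration: the fixed random projection stands in for a large pretrained audio transformer (an assumption for self-containedness), and the shallow classifier is a ridge-regularized linear probe fit in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained extractor: in practice this would be a
# large audio transformer; here it is a fixed random projection (assumption).
W = rng.normal(size=(1000, 128))

def extract(x):
    # x: (n, 1000) raw-ish inputs -> (n, 128) frozen embeddings.
    return np.tanh(x @ W)

# Small downstream dataset, too small to train an end-to-end model.
X = rng.normal(size=(40, 1000))
y = rng.integers(0, 2, size=40)

# Shallow classifier on top of the frozen embeddings:
# ridge-regularized least squares against one-hot targets (linear probe).
E = extract(X)
Y = np.eye(2)[y]
beta = np.linalg.solve(E.T @ E + 1e-2 * np.eye(128), E.T @ Y)
pred = (E @ beta).argmax(axis=1)
print((pred == y).mean())
```

Only `beta` is fit on the downstream data; the extractor stays frozen, which is what makes this practical when the target dataset is small.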
Low-Complexity Acoustic Scene Classification Using Data Augmentation and Lightweight ResNet
We describe our work on low-complexity acoustic scene classification (ASC)
with multiple devices, namely subtask A of Task 1 of the DCASE2021 challenge.
This subtask focuses on classifying audio samples of multiple devices with a
low-complexity model, where two main difficulties need to be overcome. First,
the audio samples are recorded by different devices, so there is a mismatch in
recording conditions across samples. We reduce the negative impact of this
device mismatch with several strategies, including data augmentation (e.g.,
mix-up, spectrum correction, pitch shift), a multi-patch network structure, and
channel attention. Second, the model size
should be smaller than a threshold (e.g., 128 KB required by the DCASE2021
challenge). To meet this condition, we adopt a ResNet with both depthwise
separable convolution and channel attention as the backbone network, and
perform model compression. In summary, we propose a low-complexity ASC method
using data augmentation and a lightweight ResNet. Evaluated on the official
development and evaluation datasets, our method obtains classification accuracy
scores of 71.6% and 66.7%, respectively, and log-loss scores of 1.038 and
1.136, respectively. Our final model size is 110.3 KB, which is smaller than
the maximum of 128 KB.
Comment: 5 pages, 5 figures, 4 tables. Accepted for publication in the 16th
IEEE International Conference on Signal Processing (IEEE ICSP
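Of the augmentations listed above, mix-up is the simplest to illustrate. The numpy sketch below blends two spectrograms and their one-hot labels with a Beta-distributed coefficient; the `alpha=0.2` default is a common choice, not necessarily the value used in this work.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix-up augmentation: blend two training spectrograms and their
    one-hot labels with a coefficient drawn from Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Toy example: two 64-band x 100-frame spectrograms from different classes.
x1, x2 = np.ones((64, 100)), np.zeros((64, 100))
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
xm, ym = mixup(x1, y1, x2, y2)
print(xm.shape, ym.sum())
```

Because mixed samples interpolate between recordings, the classifier sees inputs that lie "between" devices and classes, which helps smooth over device-specific artifacts.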
- …