A Two-Stage Approach to Device-Robust Acoustic Scene Classification
To improve device robustness, a key requirement for a competitive
data-driven acoustic scene classification (ASC) system, we propose a novel
two-stage system based on fully convolutional neural networks (CNNs). Our
two-stage system leverages an ad-hoc score combination of two CNN
classifiers: (i) the first CNN classifies acoustic inputs into one of three
broad classes, and (ii) the second CNN classifies the same inputs into one of
ten finer-grained classes. Three different CNN architectures are explored to
implement the two-stage classifiers, and a frequency sub-sampling scheme is
investigated. Moreover, novel data augmentation schemes for ASC are also
studied. Evaluated on DCASE 2020 Task 1a, our results show that the
proposed ASC system attains a state-of-the-art accuracy on the development set,
where our best system, a two-stage fusion of CNN ensembles, delivers an 81.9%
average accuracy on multi-device test data and obtains a significant
improvement on unseen devices. Finally, neural saliency analysis with class
activation mapping (CAM) gives new insights into the patterns learnt by our
models.
Comment: Submitted to ICASSP 2021. Code available:
https://github.com/MihawkHu/DCASE2020_task
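The abstract does not spell out the score-combination rule, so the following is a minimal sketch of one plausible two-stage fusion, assuming the ten fine-grained classes nest under the three broad ones; the FINE_TO_BROAD grouping, the interpolation weight alpha, and the fusion formula itself are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def softmax(x):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Assumed mapping from the 10 fine-grained DCASE 2020 Task 1a classes to
# 3 broad classes (e.g. indoor / outdoor / transportation); the exact
# grouping used by the authors is not given in the abstract.
FINE_TO_BROAD = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])

def two_stage_fusion(broad_logits, fine_logits, alpha=0.5):
    """Fuse scores from the 3-class and 10-class CNNs.

    broad_logits: (batch, 3) outputs of the first-stage CNN
    fine_logits:  (batch, 10) outputs of the second-stage CNN
    alpha: assumed interpolation weight for the broad-class evidence
    """
    broad_prob = softmax(broad_logits)           # (batch, 3)
    fine_prob = softmax(fine_logits)             # (batch, 10)
    # Each fine class inherits the probability of its parent broad class.
    parent_prob = broad_prob[:, FINE_TO_BROAD]   # (batch, 10)
    fused = (1.0 - alpha) * fine_prob + alpha * parent_prob * fine_prob
    return fused.argmax(axis=1)                  # predicted fine-grained class

# Usage: two_stage_fusion(np.random.randn(4, 3), np.random.randn(4, 10))
```

Weighting each fine-class probability by its parent broad-class probability rewards predictions on which the two stages agree, which is one simple way to realize the coarse-then-fine behaviour the abstract describes.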
An Acoustic Segment Model Based Segment Unit Selection Approach to Acoustic Scene Classification with Partial Utterances
In this paper, we propose a sub-utterance unit selection framework to remove
acoustic segments in audio recordings that carry little information for
acoustic scene classification (ASC). Our approach is built upon a universal set
of acoustic segment units covering the overall acoustic scene space. First,
those units are modeled with acoustic segment models (ASMs), which are used
to tokenize acoustic scene utterances into sequences of acoustic segment
units. Next,
paralleling the idea of stop words in information retrieval, stop ASMs are
automatically detected. Finally, acoustic segments associated with the stop
ASMs are blocked because of their low indexing power in retrieving most
acoustic scenes. In contrast to building scene models with whole utterances,
we then use the ASM-removed sub-utterances, i.e., acoustic utterances without
stop acoustic segments, as inputs to the AlexNet-L back-end for final
classification. On the DCASE 2018 dataset, scene classification accuracy
increases from 68% with whole utterances to 72.1% with segment selection.
This is a competitive accuracy attained without any data augmentation or
ensemble strategy. Moreover, our approach compares favourably to AlexNet-L
with attention.
Comment: Accepted by Interspeech 202
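As a rough illustration of the stop-ASM idea, here is a minimal sketch, assuming each utterance has already been tokenized into a sequence of ASM unit labels by the ASM decoder (which is outside this sketch); the document-frequency criterion, the top_k cutoff, and both helper names are hypothetical, and the authors' actual selection rule may differ.

```python
from collections import Counter

def find_stop_asms(tokenized_utterances, top_k=5):
    """Flag the top_k most frequent units as stop ASMs.

    tokenized_utterances: iterable of per-utterance lists of unit labels,
    e.g. [["u03", "u17", "u03"], ["u17", "u42"], ...]. Assumed criterion:
    raw document frequency, paralleling stop words in text retrieval.
    """
    doc_freq = Counter()
    for units in tokenized_utterances:
        doc_freq.update(set(units))  # count each unit once per utterance
    return {unit for unit, _ in doc_freq.most_common(top_k)}

def remove_stop_segments(units, segment_bounds, stop_asms):
    """Keep only segments whose unit label is not a stop ASM.

    units: per-segment ASM labels for one utterance
    segment_bounds: matching (start_s, end_s) times for each segment
    Returns the retained (label, bounds) pairs, i.e. the sub-utterance
    that would be passed to the AlexNet-L back-end.
    """
    return [(u, b) for u, b in zip(units, segment_bounds)
            if u not in stop_asms]
```

Counting each unit at most once per utterance mirrors document frequency in information retrieval, where terms that appear in nearly every document carry little discriminative power for indexing.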