Attention and Localization based on a Deep Convolutional Recurrent Model for Weakly Supervised Audio Tagging
Audio tagging aims to perform multi-label classification on audio chunks and
it is a newly proposed task in the Detection and Classification of Acoustic
Scenes and Events 2016 (DCASE 2016) challenge. This task encourages research
efforts to better analyze and understand the content of the huge amounts of
audio data on the web. The difficulty in audio tagging is that it only has a
chunk-level label without a frame-level label. This paper presents a weakly
supervised method that not only predicts the tags but also indicates the
temporal locations of the acoustic events that occur. The attention scheme is found to be
effective in identifying the important frames while ignoring the unrelated
frames. The proposed framework is a deep convolutional recurrent model with two
auxiliary modules: an attention module and a localization module. The proposed
algorithm was evaluated on Task 4 of the DCASE 2016 challenge. State-of-the-art
performance was achieved on the evaluation set with equal error rate (EER)
reduced from 0.13 to 0.11, compared with the convolutional recurrent baseline
system.

Comment: 5 pages, submitted to Interspeech 2017
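The attention scheme described above can be illustrated with a minimal sketch: frame-level class probabilities are weighted by a softmax-normalized attention score per frame, so the frames the attention module deems important dominate the clip-level tag. This is an illustration under assumed shapes, not the authors' exact implementation.

```python
# Sketch of attention-based temporal pooling for weakly supervised audio
# tagging. Shapes and the toy numbers below are illustrative assumptions.
import numpy as np

def attention_pooling(frame_probs, attention_logits):
    """Aggregate frame-level probabilities into clip-level probabilities.

    frame_probs:      (T, C) per-frame class probabilities in [0, 1]
    attention_logits: (T,)   unnormalized importance score per frame
    Returns a (C,) vector of clip-level tag probabilities.
    """
    # Softmax over time turns logits into attention weights summing to 1.
    w = np.exp(attention_logits - attention_logits.max())
    w = w / w.sum()
    # Weighted sum over frames: highly attended frames contribute most,
    # which also localizes the event in time via the weights themselves.
    return (w[:, None] * frame_probs).sum(axis=0)

# Toy example: 4 frames, 2 classes; the attention focuses on frame 2.
probs = np.array([[0.1, 0.9],
                  [0.2, 0.8],
                  [0.9, 0.1],
                  [0.1, 0.9]])
att = np.array([-5.0, -5.0, 5.0, -5.0])
clip = attention_pooling(probs, att)
```

Because the weights pick out frame 2, the clip-level prediction is close to that frame's distribution, which is how attention both tags the chunk and indicates where the event occurred.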
Large-scale weakly supervised audio classification using gated convolutional neural network
In this paper, we present a gated convolutional neural network and a temporal
attention-based localization method for audio classification, which won the 1st
place in the large-scale weakly supervised sound event detection task of
Detection and Classification of Acoustic Scenes and Events (DCASE) 2017
challenge. The audio clips in this task, which are extracted from YouTube
videos, are manually labeled with one or a few audio tags but without
timestamps of the audio events; such data are referred to as weakly labeled data. Two
sub-tasks are defined in this challenge: audio tagging and sound event
detection, both using this weakly labeled data. A convolutional recurrent
neural network (CRNN) with learnable gated linear unit (GLU) non-linearities,
applied on the log Mel spectrogram, is proposed. In addition, a temporal
attention method is proposed along the frames to predict the locations of each
audio event in a chunk from the weakly labeled data. As a team, we ranked 1st
and 2nd in these two sub-tasks of the DCASE 2017 challenge, with an F-score of
55.6% and an equal error rate of 0.73, respectively.

Comment: submitted to ICASSP 2018; a summary of the 1st-place system in the
DCASE 2017 Task 4 challenge
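A gated linear unit computes an ungated linear projection and multiplies it elementwise by a sigmoid gate of a second projection, letting the network learn to suppress irrelevant time-frequency units. A minimal sketch, with illustrative weight shapes:

```python
# Minimal sketch of a gated linear unit (GLU) used as a learnable
# non-linearity: output = (X @ W) * sigmoid(X @ V).
# The matrix sizes are toy assumptions, not the paper's configuration.
import numpy as np

def glu(x, w, v):
    """x: (T, F) features; w, v: (F, H) learnable weights. Returns (T, H)."""
    linear = x @ w                          # ungated linear projection
    gate = 1.0 / (1.0 + np.exp(-(x @ v)))   # sigmoid gate in (0, 1)
    return linear * gate                    # gate scales each unit

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))   # 5 frames, 8 log-Mel bins (toy sizes)
w = rng.standard_normal((8, 4))
v = rng.standard_normal((8, 4))
y = glu(x, w, v)
```

Since the gate lies strictly in (0, 1), each output unit's magnitude is bounded by the ungated projection, which is what allows the gate to act as a learned attenuator.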
Capsule Routing for Sound Event Detection
The detection of acoustic scenes is a challenging problem in which
environmental sound events must be detected from a given audio signal. This
includes classifying the events as well as estimating their onset and offset
times. We approach this problem with a neural network architecture that uses
the recently-proposed capsule routing mechanism. A capsule is a group of
activation units representing a set of properties for an entity of interest,
and the purpose of routing is to identify part-whole relationships between
capsules. That is, a capsule in one layer is assumed to belong to a capsule in
the layer above in terms of the entity being represented. Using capsule
routing, we wish to train a network that can learn global coherence implicitly,
thereby improving generalization performance. Our proposed method is evaluated
on Task 4 of the DCASE 2017 challenge. Results show that classification
performance is state-of-the-art, achieving an F-score of 58.6%. In addition,
overfitting is reduced considerably compared to other architectures.

Comment: Paper accepted for the 26th European Signal Processing Conference
(EUSIPCO 2018)
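The routing-by-agreement idea can be sketched in a few lines: lower-layer capsules cast prediction vectors for each higher-layer capsule, and coupling coefficients are iteratively increased for the couplings whose predictions agree with the resulting output capsule. This follows the general dynamic-routing recipe; the dimensions and iteration count are assumptions, not this paper's configuration.

```python
# Illustrative sketch of dynamic routing between capsule layers.
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Vector non-linearity: keeps orientation, bounds the norm in [0, 1)."""
    norm2 = (s ** 2).sum(axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def route(u_hat, iterations=3):
    """u_hat: (num_in, num_out, dim) prediction vectors from lower capsules.
    Returns (num_out, dim) output capsules."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                  # routing logits
    for _ in range(iterations):
        # Coupling coefficients: each input distributes over outputs.
        c = np.exp(b - b.max(axis=1, keepdims=True))
        c = c / c.sum(axis=1, keepdims=True)
        s = (c[:, :, None] * u_hat).sum(axis=0)      # weighted votes
        v = squash(s)                                # output capsules
        # Agreement step: inputs whose predictions align with v
        # get a larger coupling next iteration (part-whole assignment).
        b = b + (u_hat * v[None, :, :]).sum(axis=-1)
    return v

rng = np.random.default_rng(1)
u_hat = rng.standard_normal((6, 3, 4))  # 6 inputs, 3 outputs, dim 4 (toy)
v = route(u_hat)
```

The squash non-linearity keeps every output capsule's norm below 1, so the norm can be read as the probability that the represented entity (here, a sound event) is present.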
Polyphonic audio tagging with sequentially labelled data using CRNN with learnable gated linear units
Audio tagging aims to detect the types of sound events occurring in an audio
recording. To tag polyphonic audio recordings, we propose to use the
Connectionist Temporal Classification (CTC) loss function on top of a
Convolutional Recurrent Neural Network (CRNN) with learnable Gated Linear Units
(GLU-CTC), based on a new type of audio label data: Sequentially Labelled Data
(SLD). In GLU-CTC, the CTC objective function maps the frame-level probability of
labels to clip-level probability of labels. To compare the mapping ability of
GLU-CTC for sound events, we train a CRNN with GLU based on Global Max Pooling
(GLU-GMP) and a CRNN with GLU based on Global Average Pooling (GLU-GAP). We
also compare the proposed GLU-CTC system with the baseline system, which is a
CRNN trained using CTC loss function without GLU. The experiments show that the
GLU-CTC achieves an Area Under Curve (AUC) score of 0.882 in audio tagging,
outperforming the GLU-GMP of 0.803, GLU-GAP of 0.766 and baseline system of
0.837. This indicates that, with the same GLU-based CRNN, the CTC mapping
outperforms both the GMP and GAP mappings, and that, with CTC mapping in both
cases, the CRNN with GLUs outperforms the CRNN without them.

Comment: DCASE2018 Workshop. arXiv admin note: text overlap with
arXiv:1808.0193
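The two simpler frame-to-clip mappings the paper compares against are easy to sketch; the CTC mapping itself needs a full alignment model and is not reproduced here. The toy probabilities below are assumptions for illustration.

```python
# Sketch of the two baseline frame-to-clip mappings: global max pooling
# (GMP) and global average pooling (GAP) over frame-level probabilities.
import numpy as np

def global_max_pool(frame_probs):
    """Clip probability = max over frames; fires if any single frame fires."""
    return frame_probs.max(axis=0)

def global_avg_pool(frame_probs):
    """Clip probability = mean over frames; dilutes short-lived events."""
    return frame_probs.mean(axis=0)

# A short event active in 1 of 10 frames: GMP detects it, GAP dilutes it.
probs = np.zeros((10, 1))
probs[3, 0] = 0.95
gmp = global_max_pool(probs)
gap = global_avg_pool(probs)
```

This contrast motivates a sequence-aware mapping such as CTC: GMP is sensitive but ignores how long and how often an event occurs, while GAP underestimates brief events, whereas CTC can account for the order of events given sequentially labelled data.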