A Joint Framework for Audio Tagging and Weakly Supervised Acoustic Event Detection Using DenseNet with Global Average Pooling
This paper proposes a network architecture mainly designed for audio tagging,
which can also be used for weakly supervised acoustic event detection (AED).
The proposed network consists of a modified DenseNet as the feature extractor,
and a global average pooling (GAP) layer to predict frame-level labels at
inference time. This architecture is inspired by the work proposed by Zhou et
al., a well-known framework using GAP to localize visual objects given
image-level labels. While most previous work on weakly supervised AED
used recurrent layers with attention-based mechanisms to localize acoustic
events, the proposed network directly localizes events using the feature map
extracted by DenseNet without any recurrent layers. In the audio tagging task
of DCASE 2017, our method significantly outperforms the state-of-the-art
method, improving F1 score by an absolute 5.3% on the dev set and 6.0% on the
eval set. For the weakly supervised AED task in DCASE 2018, our model
outperforms the state-of-the-art method in event-based F1 by an absolute 8.1%
on the dev set and 0.5% on the eval set, using data augmentation and
tri-training to leverage unlabeled data.

Comment: Accepted by Interspeech 202
Joint Weakly Supervised AT and AED Using Deep Feature Distillation and Adaptive Focal Loss
A good joint training framework is very helpful for improving the performance
of weakly supervised audio tagging (AT) and acoustic event detection (AED)
simultaneously. In this study, we propose three methods to improve the best
teacher-student framework of DCASE2019 Task 4 for both the AT and AED tasks. A
frame-level, target-event-based deep feature distillation is proposed first;
it aims to leverage the potential of limited strongly labeled data in a weakly
supervised framework to learn better intermediate feature maps. We then
propose an adaptive focal loss and a two-stage training strategy to enable
effective and more accurate model training, in which the contributions of
difficult-to-classify and easy-to-classify acoustic events to the total cost
function are automatically adjusted. Furthermore, an event-specific
post-processing step is designed to improve the prediction of target event
timestamps.
Our experiments are performed on the public DCASE2019 Task 4 dataset, and the
results show that our approach achieves competitive performance on both the AT
(49.8% F1-score) and AED (81.2% F1-score) tasks.
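The focal loss mentioned above down-weights easy examples so that hard-to-classify events dominate the cost. Below is a minimal binary sketch using the standard focal loss formulation; the abstract does not specify how the adaptive variant tunes the focusing parameter, so treating gamma as a tunable (possibly per-class) value is an assumption.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0):
    """Binary focal loss sketch.

    probs:   predicted probabilities in (0, 1), any shape.
    targets: binary labels (0 or 1), same shape.
    gamma:   focusing parameter; 0 recovers plain cross-entropy.
             May be a scalar or a broadcastable per-class array.
    """
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    # Probability assigned to the true class of each example.
    p_t = np.where(targets == 1, probs, 1 - probs)
    # (1 - p_t)^gamma shrinks toward 0 for confident (easy) examples,
    # so their contribution to the total cost is suppressed.
    return -((1 - p_t) ** gamma) * np.log(p_t)
```

With gamma = 0 the modulating factor is 1 and the loss reduces to binary cross-entropy; larger gamma shifts the training signal toward the difficult events.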