Affinity Mixup for Weakly Supervised Sound Event Detection
The weakly supervised sound event detection problem is the task of predicting
the presence of sound events and their corresponding starting and ending points
in a weakly labeled dataset. A weak dataset associates each training sample (a
short recording) with one or more of the sound sources present in it. Networks that rely solely on
convolutional and recurrent layers cannot directly relate multiple frames in a
recording. Motivated by attention and graph neural networks, we introduce the
concept of an affinity mixup to incorporate time-level similarities and make a
connection between frames. This regularization technique mixes up features in
different layers using an adaptive affinity matrix. Our proposed affinity mixup
network improves event-F1 scores over state-of-the-art techniques.
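A minimal sketch of the mixing step described above, assuming frames are related through a softmax-normalised dot-product affinity and a fixed mixing weight `lam` (both illustrative assumptions rather than the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def affinity_mixup(x: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """Mix each frame with an affinity-weighted sum of all frames.

    x:   (batch, time, feat) hidden features from some layer.
    lam: mixing weight (an assumed hyper-parameter, not from the paper).
    """
    d = x.size(-1)
    # Adaptive affinity: row-stochastic matrix of pairwise frame similarities.
    affinity = F.softmax(x @ x.transpose(1, 2) / d ** 0.5, dim=-1)  # (B, T, T)
    # Interpolate between the raw features and their affinity-mixed version.
    return (1.0 - lam) * x + lam * affinity @ x
```

Applied at several layers, such a step lets distant frames of the same event exchange information, which purely convolutional and recurrent stacks can only do indirectly.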
Guided learning for weakly-labeled semi-supervised sound event detection
We propose a simple but efficient method termed Guided Learning for
weakly-labeled semi-supervised sound event detection (SED). There are two
sub-targets implied in weakly-labeled SED: audio tagging and boundary
detection. Instead of designing a single model by considering a trade-off
between the two sub-targets, we design a teacher model aiming at audio tagging
to guide a student model aiming at boundary detection to learn using the
unlabeled data. The guidance is guaranteed by the audio tagging performance gap
between the two models. Meanwhile, the student model, freed from this
trade-off, can provide better boundary detection results. We
propose a principle to design such two models based on the relation between the
temporal compression scale and the two sub-targets. We also propose an
end-to-end semi-supervised learning process that enables the abilities of
these two models to improve alternately. Experiments on the DCASE2018 Task 4 dataset
show that our approach achieves competitive performance.

Comment: Accepted by ICASSP 2020
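A rough sketch of one training step under this scheme. Hard pseudo labels, binary cross-entropy as the tagging loss, and a single shared optimizer are illustrative assumptions, not details taken from the paper:

```python
import torch
import torch.nn.functional as F

def guided_learning_step(teacher, student, weak_x, weak_y, unlab_x, opt):
    """One semi-supervised step: the tagging-oriented teacher (coarse
    temporal compression) guides the boundary-oriented student (fine
    temporal resolution) on unlabeled clips.

    Both models are assumed to map a batch of clips to clip-level event
    probabilities; the student derives them by pooling its frame-level
    output, so training its clip output also shapes its frame output.
    """
    opt.zero_grad()
    # Supervised part: both models fit the weak (clip-level) labels.
    loss = F.binary_cross_entropy(teacher(weak_x), weak_y) \
         + F.binary_cross_entropy(student(weak_x), weak_y)
    # Unsupervised part: the teacher's stronger audio tagging yields
    # pseudo clip-level targets for the student on unlabeled data.
    with torch.no_grad():
        pseudo = (teacher(unlab_x) > 0.5).float()
    loss = loss + F.binary_cross_entropy(student(unlab_x), pseudo)
    loss.backward()
    opt.step()
    return float(loss)
```

The design principle from the abstract would then amount to giving the teacher aggressive temporal pooling (good tagging, poor boundaries) and the student little or none (good boundaries), so the tagging gap that drives the guidance exists by construction.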
Towards duration robust weakly supervised sound event detection
Sound event detection (SED) is the task of tagging the presence or absence of
audio events and their corresponding intervals within a given audio clip. While
SED can be done using supervised machine learning, where training data is fully
labeled with access to per event timestamps and duration, our work focuses on
weakly-supervised sound event detection (WSSED), where prior knowledge about an
event's duration is unavailable. Recent research within the field focuses on
improving segment- and event-level localization performance for specific
datasets regarding specific evaluation metrics. Specifically, well-performing
event-level localization requires fully labeled development subsets to obtain
event duration estimates, which significantly benefits localization
performance. Moreover, well-performing segment-level localization models output
predictions at a coarse scale (e.g., 1 second), hindering their deployment on
datasets containing very short events (< 1 second). This work proposes a
duration robust CRNN (CDur) framework, which aims to achieve competitive
performance in terms of segment- and event-level localization. This paper
proposes a new post-processing strategy named "Triple Threshold" and
investigates two data augmentation methods along with a label smoothing method
within the scope of WSSED. Evaluation of our model is done on the DCASE2017 and
2018 Task 4 datasets, and URBAN-SED. Our model outperforms other approaches on
the DCASE2018 and URBAN-SED datasets without requiring prior duration
knowledge. In particular, our model is capable of similar performance to
strongly-labeled supervised models on the URBAN-SED dataset. Lastly, ablation
experiments reveal that, without post-processing, our model's localization
performance drops significantly less than that of other approaches.
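The abstract does not define the "Triple Threshold" itself; a plausible sketch is a clip-level gate stacked on conventional double thresholding, where frames above a high threshold seed an event that is extended while frames stay above a low threshold. All three threshold values below are assumed defaults, and the paper's exact definition may differ:

```python
import numpy as np

def triple_threshold(frame_probs, clip_prob, hi=0.75, low=0.2, clip_thres=0.5):
    """Post-process one event class of one clip into (onset, offset) frames.

    frame_probs: (T,) frame-level probabilities for the class.
    clip_prob:   clip-level probability for the same class.
    """
    if clip_prob < clip_thres:           # threshold 1: clip-level gate
        return []
    seeds = frame_probs >= hi            # threshold 2: event seeds
    active = frame_probs >= low          # threshold 3: event extent
    events, t, T = [], 0, len(frame_probs)
    while t < T:
        if seeds[t]:
            on, off = t, t
            while on > 0 and active[on - 1]:        # grow left
                on -= 1
            while off + 1 < T and active[off + 1]:  # grow right
                off += 1
            events.append((on, off + 1))            # offset is exclusive
            t = off                                 # skip past this event
        t += 1
    return events
```

Compared with a single hard threshold, the low threshold preserves full event durations (helping event-level F1 without duration priors), while the clip-level gate suppresses spurious frame activity for classes absent from the clip.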