Furnishing Sound Event Detection with Language Model Abilities
Recently, the ability of language models (LMs) has attracted increasing
attention in visual cross-modality. In this paper, we further explore the
generation capacity of LMs for sound event detection (SED), beyond the visual
domain. Specifically, we propose an elegant method that aligns audio features
and text features to accomplish sound event classification and temporal
localization. The framework consists of an acoustic encoder, a contrastive
module that aligns the corresponding representations of the text and audio,
and a decoupled language decoder that generates temporal and event sequences
from the audio features. Compared with conventional works that require
complicated processing and barely utilize limited audio features, our model is
more concise and comprehensive, since the language model directly leverages its
semantic capabilities to generate the sequences. We investigate different
decoupling modules to demonstrate their effectiveness for timestamp capture and
event classification. Evaluation results show that the proposed method
generates accurate sound event detection sequences. Comment: 8 pages, 2
figures, published to AAA
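The abstract describes a contrastive module that aligns audio and text representations. As a rough illustration only (the paper's actual encoders, projection heads, and loss are not given here, so every name below is hypothetical), a CLIP-style symmetric contrastive objective over paired audio/text embeddings can be sketched as:

```python
# Hedged sketch of contrastive audio-text alignment; not the paper's code.
import numpy as np

def l2_normalize(x, axis=-1):
    # Normalize rows so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_logits(audio_emb, text_emb, temperature=0.07):
    # audio_emb: (N, d) acoustic-encoder outputs
    # text_emb:  (N, d) text embeddings for the paired event labels
    a = l2_normalize(audio_emb)
    t = l2_normalize(text_emb)
    return a @ t.T / temperature  # (N, N) pairwise similarity matrix

def contrastive_loss(logits):
    # Symmetric cross-entropy: matched audio/text pairs sit on the
    # diagonal and should receive the highest similarity in both
    # the audio-to-text and text-to-audio directions.
    n = logits.shape[0]
    log_p_a = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    diag = np.arange(n)
    return -(log_p_a[diag, diag].mean() + log_p_t[diag, diag].mean()) / 2

# Toy usage: near-aligned pairs should yield a small loss.
rng = np.random.default_rng(0)
audio = rng.normal(size=(4, 16))
text = audio + 0.01 * rng.normal(size=(4, 16))
loss = contrastive_loss(contrastive_logits(audio, text))
```

In this toy setup the text embeddings are small perturbations of the audio embeddings, so the diagonal dominates and the loss is close to zero; in training, both encoders would be optimized jointly to reach that regime.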