Multi-Representation Knowledge Distillation For Audio Classification
As an important component of multimedia analysis tasks, audio classification
aims to discriminate between different audio signal types and has received
intensive attention due to its wide applications. Generally speaking, the raw
signal can be transformed into various representations (such as the Short-Time Fourier Transform and Mel-Frequency Cepstral Coefficients), and the information
implied in different representations can be complementary. Ensembling the
models trained on different representations can greatly boost classification performance; however, performing inference with a large number of models is cumbersome and computationally expensive. In this paper, we propose a
novel end-to-end collaborative learning framework for the audio classification
task. The framework takes multiple representations as input to train the models in parallel. The complementary information provided by different representations is shared via knowledge distillation. Consequently, the performance of each model can be significantly improved without increasing the computational overhead during inference. Extensive experimental results
demonstrate that the proposed approach can improve the classification
performance and achieve state-of-the-art results on both acoustic scene classification and general audio tagging tasks.
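
The abstract does not spell out the training objective, but the collaborative setup it describes is commonly instantiated as mutual (two-way) distillation between branches trained on different input representations. The sketch below is a minimal illustration under that assumption: the BranchCNN architecture, the mutual_distillation_loss helper, and the temperature and weighting values are all hypothetical choices for exposition, not the paper's actual implementation.

# Minimal sketch of collaborative training with mutual distillation (assumptions:
# two branches, one per representation, KL distillation on softened outputs).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchCNN(nn.Module):
    """Small CNN classifier for one time-frequency representation (e.g. STFT or MFCC)."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def mutual_distillation_loss(logits_a, logits_b, targets, temperature=2.0, alpha=0.5):
    """Hard-label cross-entropy plus symmetric KL between the two branches' soft outputs."""
    ce = F.cross_entropy(logits_a, targets) + F.cross_entropy(logits_b, targets)
    log_pa = F.log_softmax(logits_a / temperature, dim=1)
    log_pb = F.log_softmax(logits_b / temperature, dim=1)
    # Each branch treats the other's prediction as a fixed soft target.
    kd = F.kl_div(log_pa, log_pb.exp().detach(), reduction="batchmean") + \
         F.kl_div(log_pb, log_pa.exp().detach(), reduction="batchmean")
    return ce + alpha * (temperature ** 2) * kd

# Usage: each branch sees a different representation of the same audio clip.
stft_net, mfcc_net = BranchCNN(10), BranchCNN(10)
stft_batch = torch.randn(8, 1, 257, 128)   # e.g. log-STFT spectrogram patches
mfcc_batch = torch.randn(8, 1, 40, 128)    # e.g. MFCC patches
labels = torch.randint(0, 10, (8,))
loss = mutual_distillation_loss(stft_net(stft_batch), mfcc_net(mfcc_batch), labels)
loss.backward()

Because the complementary information is exchanged only during training, either branch can be deployed on its own, which is why the approach adds no computational overhead at inference time.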