
Acoustic scene classification using joint time-frequency image-based feature representations

The classification of acoustic scenes is important in emerging applications such as automatic audio surveillance, machine listening and multimedia content analysis. In this paper, we present an approach to acoustic scene classification using joint time-frequency image-based feature representations. In acoustic scene classification, a joint time-frequency representation (TFR) has been shown to better capture important information across a wide range of low and middle frequencies in the audio signal. The audio signal is converted to Constant-Q Transform (CQT) and Mel-spectrum TFRs, and local binary patterns (LBP) are used to extract features from these TFRs. To ensure that localized spectral information is not lost, the TFRs are divided into a number of zones. We then perform score-level fusion to further improve classification accuracy. Our technique achieves a competitive classification accuracy of 83.4% on the DCASE 2016 development dataset compared to the current state of the art.
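The pipeline described above (TFR image, frequency-wise zoning, per-zone LBP histograms, score-level fusion) can be sketched roughly as follows. This is a minimal, numpy-only illustration, not the authors' implementation: a plain log-magnitude FFT spectrogram stands in for the CQT and Mel TFRs, the LBP is the basic 8-neighbour variant, and all parameters (`n_zones`, `n_fft`, the fusion weight `w`) are assumed values for the sketch.

```python
import numpy as np

def log_spectrogram(y, n_fft=512, hop=256):
    """Log-magnitude spectrogram via a framed FFT; a stand-in for the
    CQT / Mel TFR images used in the paper (assumed parameters)."""
    frames = [y[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(y) - n_fft, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)
    return np.log1p(spec)

def lbp_codes(img):
    """Basic 8-neighbour LBP: each interior pixel becomes an 8-bit code
    from comparisons with its 8 neighbours."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def zoned_lbp_features(tfr, n_zones=4):
    """Split the TFR image along the frequency axis into zones (so that
    localized spectral information is kept) and histogram the LBP codes
    of each zone; the concatenation is the feature vector."""
    feats = []
    for zone in np.array_split(tfr, n_zones, axis=0):
        hist, _ = np.histogram(lbp_codes(zone), bins=256, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))  # normalize per zone
    return np.concatenate(feats)

def fuse_scores(scores_cqt, scores_mel, w=0.5):
    """Score-level fusion: weighted average of per-class classifier
    scores from the two TFR streams (weight is an assumed parameter)."""
    return w * np.asarray(scores_cqt) + (1 - w) * np.asarray(scores_mel)
```

A separate classifier would be trained per TFR stream on these feature vectors, with `fuse_scores` combining their per-class scores before the final decision.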