3 research outputs found

    An Improved Acoustic Scene Classification Method Using Convolutional Neural Networks (CNNs)

    Predicting the acoustic environment by analyzing and classifying a sound recording of a scene is an emerging research area. This paper presents and compares different acoustic scene classification (ASC) methods for distinguishing between acoustic environments. In particular, two deep learning classification techniques, the Deep Neural Network (DNN) and the Convolutional Neural Network (CNN), have been applied using a combination of Mel-Frequency Cepstral Coefficients (MFCCs) and log Mel energies as features. DNNs and CNNs are state-of-the-art techniques that are widely used in speech recognition, computer vision, and natural language processing, and have recently achieved great success in audio classification. Both techniques were implemented and tuned through a variety of experiments with different hyperparameters, hidden layers, and units on the public benchmark datasets provided in the DCASE 2017 challenge. The proposed method applies frame-level randomization of the combined acoustic features (MFCCs and log Mel energies) during model training to achieve higher accuracy with both the DNN and the CNN. The DNN achieved 83.45% and the CNN achieved 83.65% accuracy, higher than previous work on the same DCASE 2017 benchmark datasets.
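    The abstract gives no implementation details, so the following is only a minimal sketch of the feature pipeline it describes, assuming librosa and NumPy; the feature dimensions and the extract_features/frame_level_shuffle helper names are illustrative assumptions, not the authors' code.

        import numpy as np
        import librosa

        def extract_features(wav_path, n_mfcc=40, n_mels=40):
            # Load the recording at its native sampling rate
            y, sr = librosa.load(wav_path, sr=None)
            # MFCCs: shape (n_mfcc, frames)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            # Log Mel energies: shape (n_mels, frames)
            mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
            log_mel = librosa.power_to_db(mel)
            # Combine into per-frame feature vectors: (frames, n_mfcc + n_mels)
            return np.concatenate([mfcc, log_mel], axis=0).T

        def frame_level_shuffle(frames, labels, seed=0):
            # Randomize frame order across the training set so each
            # mini-batch mixes frames from many different recordings
            rng = np.random.default_rng(seed)
            idx = rng.permutation(len(frames))
            return frames[idx], labels[idx]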

    Music Genre Classification with ResNet and Bi-GRU Using Visual Spectrograms

    Music recommendation systems have emerged as a vital component for enhancing user experience and satisfaction in music streaming services, which now dominate music consumption. The key challenge in improving these recommender systems lies in comprehending the complexity of music data, particularly for the underpinning task of music genre classification. The limitations of manual genre classification have highlighted the need for a more advanced approach, namely the Automatic Music Genre Classification (AMGC) system. While traditional machine learning techniques have shown potential in genre classification, they rely heavily on manually engineered features and feature selection and fail to capture the full complexity of music data. Deep learning architectures such as the traditional Convolutional Neural Network (CNN), on the other hand, are effective at capturing spatial hierarchies but struggle with the temporal dynamics inherent in music. To address these challenges, this study uses visual spectrograms as input and proposes a hybrid model that combines the strengths of the Residual Neural Network (ResNet) and the Gated Recurrent Unit (GRU), offering a more comprehensive analysis of music data and hence potentially more accurate genre classification.
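    The abstract describes the architecture only at a high level; the sketch below shows one plausible ResNet plus bidirectional GRU arrangement in PyTorch, assuming torchvision's resnet18 as the backbone. The layer sizes, frequency pooling, and last-timestep readout are assumptions for illustration, not the authors' exact model.

        import torch
        import torch.nn as nn
        import torchvision.models as models

        class ResNetBiGRU(nn.Module):
            # A ResNet backbone extracts spatial features from the
            # spectrogram image; a bidirectional GRU then models the
            # temporal sequence of those features along the time axis.
            def __init__(self, n_genres=10, hidden=128):
                super().__init__()
                base = models.resnet18(weights=None)
                # Keep the convolutional trunk, drop pooling/classifier
                self.cnn = nn.Sequential(*list(base.children())[:-2])
                self.gru = nn.GRU(input_size=512, hidden_size=hidden,
                                  batch_first=True, bidirectional=True)
                self.fc = nn.Linear(2 * hidden, n_genres)

            def forward(self, x):           # x: (batch, 3, freq, time)
                f = self.cnn(x)             # (batch, 512, F', T')
                f = f.mean(dim=2)           # pool frequency: (batch, 512, T')
                f = f.permute(0, 2, 1)      # time-major: (batch, T', 512)
                out, _ = self.gru(f)        # (batch, T', 2 * hidden)
                return self.fc(out[:, -1])  # logits from the final timestep

        model = ResNetBiGRU()
        logits = model(torch.randn(2, 3, 128, 256))  # two spectrogram "images"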