6 research outputs found

    Spectrogram classification using dissimilarity space

    In this work, we combine a Siamese neural network with different clustering techniques to generate a dissimilarity space that is then used to train an SVM for automated animal audio classification. The animal audio datasets used are (i) bird and (ii) cat sounds, both freely available. We exploit different clustering methods to reduce the spectrograms in the dataset to a set of centroids that are used to generate the dissimilarity space through the Siamese network. Once computed, we use the dissimilarity space to generate a vector space representation of each pattern, which is then fed into a support vector machine (SVM) to classify a spectrogram by its dissimilarity vector. Our study shows that the proposed approach based on dissimilarity space performs well on both classification problems without ad-hoc optimization of the clustering methods. Moreover, the results show that the fusion of CNN-based approaches applied to the animal audio classification problem performs better than the stand-alone CNNs.
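    A minimal sketch of the pipeline described above, assuming a generic embedding function in place of the trained Siamese network; the centroid count, clustering choice, and helper names (embed, dissimilarity_space) are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: dissimilarity-space classification of spectrograms.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def embed(spectrograms):
        # Stand-in for the learned Siamese embedding (here: simple flattening).
        return spectrograms.reshape(len(spectrograms), -1)

    def dissimilarity_space(embeddings, centroids):
        # Each pattern is described by its distances to the cluster centroids.
        return np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)

    rng = np.random.default_rng(0)
    X_train = rng.random((200, 32, 32))          # toy "spectrograms"
    y_train = rng.integers(0, 2, 200)            # toy class labels
    X_test = rng.random((50, 32, 32))

    E_train, E_test = embed(X_train), embed(X_test)

    # Clustering reduces the training spectrograms to a small set of centroids.
    centroids = KMeans(n_clusters=20, n_init=10, random_state=0).fit(E_train).cluster_centers_

    # The SVM is trained on dissimilarity vectors rather than raw features.
    svm = SVC(kernel="rbf").fit(dissimilarity_space(E_train, centroids), y_train)
    predictions = svm.predict(dissimilarity_space(E_test, centroids))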

    Animal sound classification using dissimilarity spaces

    The classifier system proposed in this work combines the dissimilarity spaces produced by a set of Siamese neural networks (SNNs) designed using four different backbones with different clustering techniques for training SVMs for automated animal audio classification. The system is evaluated on two animal audio datasets: one for cat and another for bird vocalizations. The proposed approach uses clustering methods to determine a set of centroids (in both a supervised and unsupervised fashion) from the spectrograms in the dataset. Such centroids are exploited to generate the dissimilarity space through the Siamese networks. In addition to feeding the SNNs with spectrograms, experiments process the spectrograms using the heterogeneous auto-similarities of characteristics. Once the dissimilarity spaces are computed, each pattern is "projected" into the space to obtain a vector space representation; this descriptor is then coupled to a support vector machine (SVM) to classify a spectrogram by its dissimilarity vector. Results demonstrate that the proposed approach performs competitively (without ad-hoc optimization of the clustering methods) on both animal vocalization datasets. To further demonstrate the power of the proposed system, the best standalone approach is also evaluated on the challenging Dataset for Environmental Sound Classification (ESC50).
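    A rough sketch of the fusion step described above, assuming a list of embedding functions as stand-ins for the four SNN backbones; fuse_backbones and the sum-rule fusion of SVM scores are illustrative assumptions rather than the paper's exact procedure.

    # Hypothetical sketch: fusing SVMs trained on dissimilarity spaces built
    # from several different embeddings (stand-ins for the SNN backbones).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def fuse_backbones(embed_fns, X_train, y_train, X_test, n_centroids=20):
        fused = None
        for embed in embed_fns:
            E_tr, E_te = embed(X_train), embed(X_test)
            km = KMeans(n_clusters=n_centroids, n_init=10, random_state=0).fit(E_tr)
            D_tr = np.linalg.norm(E_tr[:, None] - km.cluster_centers_[None], axis=2)
            D_te = np.linalg.norm(E_te[:, None] - km.cluster_centers_[None], axis=2)
            svm = SVC(probability=True).fit(D_tr, y_train)
            scores = svm.predict_proba(D_te)          # per-backbone class scores
            fused = scores if fused is None else fused + scores
        return fused.argmax(axis=1)                   # sum-rule fusion of SVM outputs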

    Probabilistic and Deep Learning Algorithms for the Analysis of Imagery Data

    Accurate object classification is a challenging problem for various low- to high-resolution imagery data. This applies to both natural and synthetic image datasets. However, each object recognition dataset poses its own distinct set of domain-specific problems. In order to address these issues, we need to devise intelligent learning algorithms that require a deep understanding and careful analysis of the feature space. In this thesis, we introduce three new learning frameworks for the analysis of both airborne images (the NAIP dataset) and handwritten digit datasets without and with noise (MNIST and n-MNIST, respectively). First, we propose a probabilistic framework for the analysis of the NAIP dataset which includes (1) an unsupervised segmentation module based on the Statistical Region Merging algorithm, (2) a feature extraction module that extracts a set of standard hand-crafted texture features from the images, (3) a supervised classification algorithm based on Feedforward Backpropagation Neural Networks, and (4) a structured prediction framework using Conditional Random Fields that integrates the results of the segmentation and classification modules into a single composite model to generate the final class labels. Next, we introduce two new datasets, SAT-4 and SAT-6, sampled from the NAIP imagery and use them to evaluate a multitude of deep learning algorithms, including Deep Belief Networks (DBN), Convolutional Neural Networks (CNN), and Stacked Autoencoders (SAE), for generating class labels. Finally, we propose a learning framework that integrates hand-crafted texture features with a DBN. A DBN uses an unsupervised pre-training phase to initialize the parameters of a Feedforward Backpropagation Neural Network in a global error basin, which can then be improved through a round of supervised fine-tuning; these networks can subsequently be used for classification. In the following discussion, we show that integrating hand-crafted features with a DBN yields a significant improvement in performance compared to traditional DBN models that take raw image pixels as input. We also investigate why this integration proves particularly useful for aerial datasets, using a statistical analysis based on the Distribution Separability Criterion. We then introduce a new dataset called noisy-MNIST (n-MNIST), created by adding (1) additive white Gaussian noise (AWGN), (2) motion blur, and (3) reduced contrast combined with AWGN to the MNIST dataset, and present a learning algorithm that combines probabilistic quadtrees and Deep Belief Networks. This dynamic integration of the Deep Belief Network with probabilistic quadtrees provides a significant improvement over traditional DBN models on both the MNIST and n-MNIST datasets. Finally, we extend our experiments on aerial imagery to the class of general texture images and present a theoretical analysis of Deep Neural Networks applied to texture classification. We derive the size of the feature space of textural features as well as the Vapnik-Chervonenkis dimension of certain classes of Neural Networks. We also derive some useful results on the intrinsic dimension and relative contrast of texture datasets and use these to highlight the differences between texture datasets and general object recognition datasets.
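    A minimal sketch of the three corruption types used to build n-MNIST, as described above; the noise level, blur length, and contrast factor are illustrative assumptions, not the thesis' exact parameters.

    # Hypothetical sketch: generating n-MNIST-style corruptions of images scaled to [0, 1].
    import numpy as np
    from scipy.ndimage import convolve

    def awgn(images, sigma=0.3, seed=0):
        # (1) Additive white Gaussian noise.
        rng = np.random.default_rng(seed)
        return np.clip(images + rng.normal(0.0, sigma, images.shape), 0.0, 1.0)

    def motion_blur(images, length=5):
        # (2) Motion blur via convolution with a horizontal line kernel.
        kernel = np.zeros((length, length))
        kernel[length // 2, :] = 1.0 / length
        return np.array([convolve(img, kernel, mode="nearest") for img in images])

    def reduced_contrast_awgn(images, contrast=0.5, sigma=0.1):
        # (3) Contrast reduction toward mid-gray followed by AWGN.
        return awgn(0.5 + contrast * (images - 0.5), sigma)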

    Characterizing and classifying music genres and subgenres via association analysis

    In this thesis, we investigate the problem of automatic music genre classification in the field of Music Information Retrieval (MIR). MIR seeks to apply convenient automated solutions to many music-related tasks that are too tedious to perform by hand. These tasks often deal with vast quantities of music data. An effective automatic music genre classification approach may be useful for other tasks in MIR as well. Association analysis is a technique used to explore the inherent relationships among data objects in a problem domain. We present two novel approaches which capture genre characteristics through the use of association analysis on large music datasets. The first approach extracts the characteristic features of genres and uses these features to perform classification. The second approach attempts to improve on the first by utilizing a pairwise, dichotomy-like strategy. We then apply the second approach to the problem of automatic subgenre classification.
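    A minimal sketch of the association-analysis idea described above: mine feature "items" that frequently co-occur within a genre, then label a new track by the genre whose characteristic itemsets it matches best. The item names, support threshold, and scoring rule are illustrative assumptions, not the thesis' actual features.

    # Hypothetical sketch: characteristic itemsets per genre via simple support counting.
    from collections import Counter
    from itertools import combinations

    def characteristic_itemsets(transactions, min_support=0.6, size=2):
        # transactions: list of sets of discretized feature items for one genre.
        counts = Counter()
        for items in transactions:
            counts.update(combinations(sorted(items), size))
        threshold = min_support * len(transactions)
        return {itemset for itemset, count in counts.items() if count >= threshold}

    def classify(track_items, genre_itemsets):
        # Score each genre by how many of its characteristic itemsets the track contains.
        return max(genre_itemsets,
                   key=lambda g: sum(set(s) <= track_items for s in genre_itemsets[g]))

    rock = [{"high_tempo", "distorted", "loud"}, {"high_tempo", "distorted", "4_4"}]
    jazz = [{"swing", "brass", "medium_tempo"}, {"swing", "brass", "improvised"}]
    genres = {"rock": characteristic_itemsets(rock), "jazz": characteristic_itemsets(jazz)}
    print(classify({"high_tempo", "distorted", "live"}, genres))   # -> "rock"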

    Music genre recognition using spectrograms with classifier combination

    Abstract: With the rapid expansion of the Internet, an immense volume of data has become available online. However, this information does not follow a standard form of presentation and is not available in a structured way. Because of this, tasks such as searching, retrieval, indexing, and automatic summarization of this information have become important problems, and their solutions all aim to make this content easier to access. For some time, most information about multimedia data has been organized and classified on the basis of textual information. Digital music is one of the most important types of data distributed on the Internet, and there are many studies on audio content analysis using different features and methods. A fundamental component of a content-based audio information retrieval system is an automatic music genre classification module. Music genres are categorical labels created by human experts and amateurs to determine or designate styles of music, and several studies have found that genre is an important attribute for users when organizing and retrieving music files. This work proposes novel features for representing music content, obtained from spectrogram images generated from the audio signal, for use in music genre recognition tasks. Spectrogram images have texture as their main visual attribute, so the proposed features were computed with texture descriptors from the image processing literature, in particular the Local Binary Pattern and Local Phase Quantization descriptors, both of which stand out for their good performance. We also investigated the impact of a strategy for preserving local information through image zoning. Zoning allowed the creation of multiple classifiers, one per zone, and the best results were obtained by fusing the outputs of these classifiers. Most experiments were performed on the LMD dataset using an "artist filter". The method was also evaluated on the ISMIR 2004 dataset. The best results obtained are comparable to the best results reported in the literature using other approaches. Considering the experiments on the LMD dataset with the "artist filter", the results obtained surpass the best result reported in the literature to date. Finally, dynamic classifier selection and feature selection were evaluated and showed promising results.
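    A rough sketch of the zoned-texture approach described above, assuming grayscale spectrogram images as 2-D arrays; the zone count, LBP parameters, and sum-rule fusion are illustrative assumptions rather than the thesis' exact configuration.

    # Hypothetical sketch: zoned LBP histograms from spectrograms, one SVM per zone,
    # with the per-zone class probabilities fused by summing.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    N_ZONES, P, R = 3, 8, 1
    N_BINS = P + 2                      # number of "uniform" LBP codes

    def zone_lbp_histograms(spec):
        # Split the spectrogram into horizontal frequency bands (zones) and
        # compute one LBP histogram per zone.
        feats = []
        for zone in np.array_split(spec, N_ZONES, axis=0):
            lbp = local_binary_pattern(zone, P, R, method="uniform")
            hist, _ = np.histogram(lbp, bins=N_BINS, range=(0, N_BINS), density=True)
            feats.append(hist)
        return feats

    def train_and_fuse(train_specs, y_train, test_specs):
        train_feats = [zone_lbp_histograms(s) for s in train_specs]
        test_feats = [zone_lbp_histograms(s) for s in test_specs]
        fused = 0.0
        for z in range(N_ZONES):
            Xtr = np.array([f[z] for f in train_feats])
            Xte = np.array([f[z] for f in test_feats])
            svm = SVC(probability=True).fit(Xtr, y_train)
            fused = fused + svm.predict_proba(Xte)   # sum-rule fusion of zone classifiers
        return fused.argmax(axis=1)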