
    Cultural specificities in Carnatic and Hindustani music: Commentary on the Saraga Open Dataset

    This commentary explores features of the "Saraga" article and open dataset, discussing some of the issues that arise. I argue that the CompMusic project and the resulting dataset are impressive for their sensitivity to the cultural specificities of the Hindustani and Carnatic musical styles; for example, the dataset includes manual annotations based on music-theoretical concepts from within the styles, rather than imposing conceptual categories from outside. However, I propose that there are aspects of the dataset's manual annotations that require clarification before they can be used as ground truths by other researchers. In addition, I raise questions regarding the representativeness of the dataset, an issue that has ethical implications.

    Phrase-based rāga recognition using vector space modeling

    Paper presented at the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), held 20–25 March in Shanghai, China.

    Automatic raga recognition is one of the fundamental computational tasks in Indian art music. Motivated by the way seasoned listeners identify ragas, we propose a raga recognition approach based on melodic phrases. First, we extract melodic patterns from a collection of audio recordings in an unsupervised way. Next, we group similar patterns using concepts and techniques from complex networks. Drawing an analogy to topic modeling in text classification, we then represent audio recordings using a vector space model. Finally, we employ a number of classification strategies to build a predictive model for raga recognition. To evaluate our approach, we compile a music collection of over 124 hours, comprising 480 recordings and 40 ragas. We obtain 70% accuracy on the full 40-raga collection, and up to 92% accuracy on its 10-raga subset. We show that phrase-based raga recognition is a successful strategy, on par with the state of the art, and sometimes outperforming it. A by-product of our approach, arguably as important as the recognition task itself, is the identification of raga phrases. These phrases can serve as a dictionary of semantically meaningful melodic units for several computational tasks in Indian art music.

    This work is partly supported by the European Research Council under the European Union's Seventh Framework Programme, as part of the CompMusic project (ERC grant agreement 267583).
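    The vector-space step of the pipeline described in this abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration only: it assumes melodic phrases have already been discovered and grouped, treats each recording as a bag of phrase IDs, weights them with TF-IDF, and labels a query recording by cosine similarity to per-raga centroids. The function names, the toy phrase IDs, and the nearest-centroid classifier are illustrative assumptions, not the authors' actual implementation (the paper evaluates several classification strategies).

    ```python
    import math
    from collections import Counter, defaultdict

    def tfidf_vectors(docs):
        """docs: list of phrase-ID lists, one per recording.
        Returns one dict per recording mapping phrase ID -> TF-IDF weight."""
        n = len(docs)
        df = Counter()                      # document frequency of each phrase
        for d in docs:
            df.update(set(d))
        vecs = []
        for d in docs:
            tf = Counter(d)
            vecs.append({p: (c / len(d)) * math.log(n / df[p])
                         for p, c in tf.items()})
        return vecs

    def cosine(u, v):
        """Cosine similarity between two sparse vectors (dicts)."""
        dot = sum(w * v.get(k, 0.0) for k, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def nearest_centroid(train_vecs, labels, query):
        """Assign the raga label whose centroid is most similar to the query."""
        sums = defaultdict(Counter)
        counts = Counter(labels)
        for vec, lab in zip(train_vecs, labels):
            for k, w in vec.items():
                sums[lab][k] += w
        best, best_sim = None, -1.0
        for lab, acc in sums.items():
            centroid = {k: w / counts[lab] for k, w in acc.items()}
            sim = cosine(query, centroid)
            if sim > best_sim:
                best, best_sim = lab, sim
        return best
    ```

    In this analogy, phrase IDs play the role that words play in text classification: recordings sharing characteristic phrases of a raga end up close together in the vector space, which is what makes the centroid comparison meaningful.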
