144 research outputs found

    Automatic Raga Recognition in Hindustani Classical Music

    Raga is the central melodic concept in Hindustani Classical Music. It has a complex structure, often characterized by pathos. In this paper, we describe a technique for Automatic Raga Recognition based on pitch distributions. We are able to classify ragas with commendable accuracy on our test dataset. Comment: Seminar on Computer Music, RWTH Aachen, http://hpac.rwth-aachen.de/teaching/sem-mus-17/Reports/Alekh.pd
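The pitch-distribution idea this abstract describes can be sketched as a pitch-class histogram matched against per-raga templates. This is a minimal NumPy illustration, not the paper's method: `pitch_class_distribution` and `classify_raga` are hypothetical names, and the paper's actual features and classifier may differ.

```python
import numpy as np

def pitch_class_distribution(pitches_hz, tonic_hz, bins=12):
    """Fold a pitch track into a normalized pitch-class histogram
    relative to a given tonic (illustrative helper, not from the paper)."""
    cents = 1200.0 * np.log2(np.asarray(pitches_hz, dtype=float) / tonic_hz)
    classes = np.round(cents / (1200.0 / bins)).astype(int) % bins
    hist = np.bincount(classes, minlength=bins).astype(float)
    return hist / hist.sum()

def classify_raga(query_dist, templates):
    """Nearest-template match: pick the raga whose reference
    distribution overlaps most with the query distribution."""
    return max(templates, key=lambda name: float(np.dot(query_dist, templates[name])))
```

A raga that emphasizes different scale degrees yields a different histogram, so even this crude dot-product similarity separates ragas with distinct note sets; a real system would use finer-grained pitch distributions.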

    Cultural specificities in Carnatic and Hindustani music: Commentary on the Saraga Open Dataset

    This commentary explores features of the "Saraga" article and open dataset, discussing some of the issues arising. I argue that the CompMusic project and this resulting dataset are impressive for their sensitivity to cultural specificities of the Hindustani and Carnatic musical styles; for example, the dataset includes manual annotations based on music theoretical concepts from within the styles, rather than imposing conceptual categories from outside. However, I propose there are aspects of the dataset's manual annotations that require clarification in order for them to be used as ground truths by other researchers. In addition, I raise questions regarding the representativeness of the dataset – an issue that has ethical implications.

    Acoustic Feature Identification to Recognize Rag Present in Borgit

    In the world of Indian classical music, raga recognition is a crucial undertaking. Due to its particular sound qualities, the traditional Borgit presents special difficulties for automatic raga recognition. In this research, we investigate the use of acoustic feature identification methods to create a reliable rag recognition system for Borgit performances. Each Borgit, the devotional song of Assam, is enriched with a rag, and each rag has a unique melodious tune. This paper carries out experiments on audio samples of three commonly used rags and a few Borgits sung with those rags. The acoustic features considered are FFT (Fast Fourier Transform), ZCR (Zero Crossing Rate), the mean and standard deviation of the pitch contour, and RMS (Root Mean Square). Evaluation and analysis show that FFT and ZCR are two noteworthy acoustic features that help identify the rag present in a Borgit. Finally, K-means clustering was applied to the FFT and ZCR values of the Borgits and produced correct groupings according to the rags present. This research validates FFT and ZCR as the most precise acoustic parameters for rag identification in Borgit; the roles of the standard deviation of the pitch contour and of RMS values in rag identification were also observed.
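The feature-extraction-plus-clustering pipeline described above can be sketched in NumPy. This is a hedged illustration, not the paper's code: a spectral centroid stands in for the paper's raw FFT features, the K-means here is a minimal Lloyd's-algorithm variant with farthest-point initialization, and all function names are assumptions.

```python
import numpy as np

def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs whose sign differs."""
    return float(np.mean(np.signbit(x[:-1]) != np.signbit(x[1:])))

def rms(x):
    """Root-mean-square energy of the signal."""
    return float(np.sqrt(np.mean(x ** 2)))

def spectral_centroid(x, sr):
    """Magnitude-weighted mean frequency of the FFT spectrum
    (a compact stand-in for the paper's raw FFT features)."""
    mags = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    return float(np.sum(freqs * mags) / np.sum(mags))

def kmeans(points, k, iters=50):
    """Minimal Lloyd's k-means with farthest-point initialization,
    adequate for well-separated feature clusters."""
    centers = np.array([points[0]], dtype=float)
    while len(centers) < k:  # seed each new center at the farthest point
        d2 = ((points[:, None] - centers[None]) ** 2).sum(-1).min(axis=1)
        centers = np.vstack([centers, points[np.argmax(d2)]])
    for _ in range(iters):
        labels = ((points[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels
```

On real recordings one would extract these features per excerpt and cluster the resulting feature vectors; excerpts sharing a rag should then fall in the same cluster, which is the grouping result the abstract reports.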

    Emotion Based Information Retrieval System

    Abstract—Music emotion plays an important role in music retrieval, mood detection, and other music-related applications. Many issues in music emotion recognition have been addressed by different disciplines such as physiology, psychology, cognitive science, and musicology. We present a support vector regression (SVR) based, emotion-driven music information retrieval system. We have chosen the “Raga” paradigm of Indian classical music as the basis of our formal model, since it is well understood and semi-formal in nature; much prior work also exists on Western music and Karnataka classical music. Initially, the system extracts features from music. These features are mapped into emotion categories on the Tellegen-Watson-Clark model of mood, which is an extension of Thayer’s two-dimensional emotion model. Two regression functions are trained using SVR, and distance and angle values are then predicted. A categorical response graph generated in this module shows the variation of emotion.
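The core idea above — train one regressor per emotion-model axis, then read an emotion category off the predicted coordinates — can be illustrated with a dependency-free stand-in. Ridge regression replaces SVR here purely so the sketch runs without scikit-learn; the function names and shapes are assumptions, not the paper's API.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression with a bias term.
    Stand-in for the paper's SVR: both learn a mapping from
    audio features to a continuous emotion-axis value."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)
    return w

def predict(w, X):
    """Apply a fitted regressor to new feature rows."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w
```

Following the abstract's setup, two such regressors would be fitted independently — one for the distance value and one for the angle value — and each (distance, angle) prediction then maps to a region of the Tellegen-Watson-Clark mood space.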

    From West to East: Who can understand the music of the others better?

    Recent developments in MIR have led to several benchmark deep learning models whose embeddings can be used for a variety of downstream tasks. At the same time, the vast majority of these models have been trained on Western pop/rock music and related styles. This raises research questions about whether these models can be used to learn representations for different music cultures and styles, or whether we can build similar music audio embedding models trained on data from different cultures or styles. To that end, we leverage transfer learning methods to derive insights about the similarities between the music cultures to which the data belongs. We use two Western music datasets, two traditional/folk datasets from eastern Mediterranean cultures, and two datasets belonging to Indian art music. Three deep audio embedding models, comprising two CNN-based and one Transformer-based architecture, are trained and transferred across domains to perform auto-tagging for each target-domain dataset. Experimental results show that competitive performance is achieved in all domains via transfer learning, while the best source dataset varies for each music culture. The implementation and the trained models are both provided in a public repository.
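The transfer recipe sketched in this abstract — keep a pretrained embedding model fixed and fit only a small auto-tagging head on the target-domain data — can be illustrated as a linear probe. This NumPy sketch assumes precomputed embeddings and illustrative shapes; it is not the paper's architecture or training procedure.

```python
import numpy as np

def train_probe(embeddings, tags, lr=0.5, epochs=200):
    """Multi-label linear probe on frozen audio embeddings.
    `embeddings` is an (n, d) matrix of fixed features from a
    pretrained model; `tags` is a binary (n, t) tag matrix.
    Trained by plain gradient descent on the per-tag logistic loss."""
    n, d = embeddings.shape
    W = np.zeros((d, tags.shape[1]))
    b = np.zeros(tags.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(embeddings @ W + b)))  # sigmoid per tag
        W -= lr * embeddings.T @ (p - tags) / n
        b -= lr * (p - tags).mean(axis=0)
    return W, b
```

Because only the probe's weights are updated, comparing probe accuracy across source/target dataset pairs gives exactly the kind of cross-cultural similarity signal the abstract describes.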

    ZOOTOPIA: How Does Music Make A Statement For Diversity In The Modern Society

    Diversity is a big topic in the movie Zootopia, in which music plays a tremendously important role in presenting the concept. This essay aims to identify and investigate the different musical styles that influence the score, how the various styles are put together, and how these influences are translated for storytelling purposes in the film.