
    Automatic Transcription of Drum Strokes in Carnatic Music

    The mridangam is a double-headed percussion instrument that plays a key role in Carnatic music concerts. This paper presents a novel automatic transcription algorithm to classify the strokes played on the mridangam. Onset detection is first performed to segment the audio signal into individual strokes, and feature vectors consisting of the DFT magnitude spectrum of each segmented stroke are generated. A multi-layer feedforward neural network is trained using the feature vectors as inputs and the manual transcriptions as targets. Since the mridangam is a tonal instrument tuned to a given tonic, tonic invariance is an important property of the classifier; it is achieved by augmenting the dataset with pitch-shifted copies of the audio. The algorithm consistently yields over 83% accuracy on a held-out test dataset. (Comment: 7 pages, 9 figures)
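    The pipeline described above (onset segmentation, DFT magnitude features, pitch-shift augmentation for tonic invariance) can be sketched roughly as follows. The toy stroke signal, sampling rate, and crude resampling-based pitch shift are illustrative assumptions, not the paper's implementation:

    ```python
    import numpy as np

    def dft_features(stroke, n_fft=1024):
        """DFT magnitude spectrum of one segmented stroke (zero-padded or truncated)."""
        frame = np.zeros(n_fft)
        frame[:min(len(stroke), n_fft)] = stroke[:n_fft]
        return np.abs(np.fft.rfft(frame))  # n_fft // 2 + 1 magnitude bins

    def pitch_shift(stroke, semitones):
        """Crude pitch shift by resampling with linear interpolation."""
        factor = 2.0 ** (semitones / 12.0)
        n = int(round(len(stroke) / factor))
        new_idx = np.linspace(0, len(stroke) - 1, n)
        return np.interp(new_idx, np.arange(len(stroke)), stroke)

    # Toy stroke: a decaying 200 Hz tone at an assumed 8 kHz sampling rate
    sr = 8000
    t = np.arange(0, 0.1, 1.0 / sr)
    stroke = np.exp(-20 * t) * np.sin(2 * np.pi * 200 * t)

    # Augment the training set with pitch-shifted copies (tonic invariance)
    augmented = [pitch_shift(stroke, s) for s in (-2, -1, 0, 1, 2)]
    features = np.stack([dft_features(s) for s in augmented])
    print(features.shape)  # (5, 513)
    ```

    Each augmented copy shifts the spectral peak, so the network sees the same stroke class at several tonics.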

    Vocal Source Separation for Carnatic Music

    Carnatic music is a classical music form that originates in South India and differs greatly from Western genres. Music Information Retrieval (MIR) has predominantly been used to tackle problems in Western musical genres, and its methods cannot be directly adapted to non-Western styles like Carnatic music because of fundamental differences in melody, rhythm, instrumentation, and the nature of compositions and improvisations. These conceptual differences have given rise to MIR tasks specific to Carnatic music, and researchers have steadily applied domain knowledge and technology-driven ideas to tasks like melodic analysis, rhythmic analysis, and structural segmentation. Melodic analysis of Carnatic music has been a cornerstone of this research and relies heavily on the singing voice, since the singer carries the main melody. The problem is that the singing voice is not isolated: it is accompanied by melodic, percussion, and drone instruments. Separating the singing voice from the accompaniment typically suffers from bleeding of the accompanying instruments and loss of melodic information, which in turn harms melodic analysis. The datasets used for Carnatic MIR are concert recordings of different artistes with accompanying instruments, and clean isolated singing-voice tracks are scarce. Existing source separation models are trained extensively on multi-track audio from rock and pop and do not generalize well to Carnatic music. How do we improve singing-voice source separation for Carnatic music under these constraints? This work proposes three contributions to mitigate the issue: 1) creating a dataset of isolated Carnatic music stems; 2) reusing multi-track audio with bleeding from the Saraga dataset; and 3) retraining and fine-tuning existing state-of-the-art source separation models. We hope that this effort to improve source separation for Carnatic music can overcome existing shortcomings, generalize well to the Carnatic music datasets in the literature, and in turn improve melodic analysis of this music culture.
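    As a minimal illustration of the separation problem itself, a Wiener-style soft mask applied to magnitude spectra is a common generic baseline. This is not the thesis's retrained models; the toy tones, frame sizes, and the oracle vocal spectrum (which a trained model would estimate) are all assumptions:

    ```python
    import numpy as np

    # Toy "mixture": a vocal-like tone (440 Hz) plus a drone-like tone (110 Hz) at 8 kHz
    sr, dur = 8000, 0.5
    t = np.arange(0, dur, 1.0 / sr)
    vocal = np.sin(2 * np.pi * 440 * t)
    drone = 0.8 * np.sin(2 * np.pi * 110 * t)
    mix = vocal + drone

    def stft_mag(x, n_fft=512, hop=256):
        """Frame-wise DFT magnitude spectra (rectangular window, for brevity)."""
        frames = [x[i:i + n_fft] for i in range(0, len(x) - n_fft + 1, hop)]
        return np.abs(np.fft.rfft(np.stack(frames), axis=1))

    M = stft_mag(mix)
    V = stft_mag(vocal)  # oracle vocal magnitude, standing in for a model's estimate
    D = stft_mag(drone)

    # Wiener-style soft mask: fraction of mixture energy attributed to the voice
    mask = V**2 / (V**2 + D**2 + 1e-12)
    vocal_est = mask * M  # masked mixture spectrogram
    ```

    In real recordings the accompaniment overlaps the voice in time and frequency, which is exactly why masking leaks accompaniment into the vocal estimate ("bleeding") or removes melodic detail.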

    Culturally sensitive strategies for automatic music prediction

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 103-112). Music has been shown to form an essential part of the human experience: every known society engages in music. However, as universal as it may be, music has evolved into a variety of genres peculiar to particular cultures, and people acquire musical skill, understanding, and appreciation specific to the music they have been exposed to. This process of enculturation builds mental structures that form the cognitive basis for musical expectation. In this thesis I argue that in order for machines to perform musical tasks like humans do, in particular to predict music, they need to be subjected to a similar enculturation process by design. This work is grounded in an information-theoretic framework that takes cultural context into account. I introduce a measure of musical entropy to analyze the predictability of musical events as a function of prior musical exposure. Then I discuss computational models for music representation that are informed by genre-specific containers for musical elements like notes. Finally I propose a software framework for automatic music prediction. The system extracts a lexicon of melodic (or timbral) and rhythmic primitives from audio and generates a hierarchical grammar to represent the structure of a particular musical form. To improve prediction accuracy, context can be switched with cultural plug-ins designed for specific musical instruments and genres. In listening experiments involving music synthesis, a culture-specific design fares significantly better than a culture-agnostic one; my findings thus support the importance of computational enculturation for automatic music prediction. Furthermore, I suggest that in order to sustain and cultivate the diversity of musical traditions around the world, it is indispensable that we design culturally sensitive music technology. by Mihir Sarkar. Ph.D.
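    The thesis defines its entropy measure relative to prior musical exposure, which is not reproduced here. As a generic stand-in, plain Shannon entropy over a symbol sequence illustrates the underlying idea that repetitive, familiar material carries lower entropy and is therefore more predictable:

    ```python
    import math
    from collections import Counter

    def entropy_bits(events):
        """Shannon entropy (bits per event) of a symbol sequence: lower = more predictable."""
        counts = Counter(events)
        total = len(events)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # A repetitive phrase (as an enculturated listener might expect) vs. a varied one
    familiar = list("CDECDECDE")
    varied   = list("CDEFGABCA")
    print(entropy_bits(familiar) < entropy_bits(varied))  # True
    ```

    A culture-aware predictor would condition these probabilities on a corpus from the target tradition rather than on raw symbol frequencies.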

    onsetsync: An R Package for Onset Synchrony Analysis


    Computational methods for percussion music analysis: the Afro-Uruguayan candombe drumming as a case study

    Most of the research conducted on information technologies applied to music has been largely limited to a few mainstream styles of so-called 'Western' music. The resulting tools often do not generalize properly or cannot be easily extended to other music traditions, so culture-specific approaches have recently been proposed as a way to build richer and more general computational models for music. This thesis aims to contribute to the computer-aided study of rhythm, focusing on percussion music and searching for appropriate solutions from a culture-specific perspective, with Afro-Uruguayan candombe drumming as a case study. This choice is mainly motivated by candombe's challenging rhythmic characteristics, which are troublesome for most existing analysis methods; in this way, the thesis attempts to push the boundaries of current music technologies. The thesis offers an overview of the historical, social and cultural context in which candombe drumming is embedded, along with a description of the rhythm. One of the specific contributions of the thesis is the creation of annotated datasets of candombe drumming suitable for computational rhythm analysis. Performances were purposely recorded and annotated with metrical information, onset locations, and sections. A dataset of annotated recordings for beat and downbeat tracking was publicly released, and an audio-visual dataset of performances was produced, serving both documentary and research purposes. Part of the dissertation focused on the discovery and analysis of rhythmic patterns from audio recordings. A representation in the form of a map of rhythmic patterns based on spectral features was devised. The type of analyses that can be conducted with the proposed methods is illustrated with some experiments.
    The dissertation also systematically approached (to the best of our knowledge, for the first time) the study and characterization of the micro-rhythmical properties of candombe drumming. The findings suggest that micro-timing is a structural component of the rhythm, producing a sort of characteristic "swing". The rest of the dissertation was devoted to the automatic inference and tracking of the metric structure from audio recordings. A supervised Bayesian scheme for rhythmic pattern tracking was proposed, and a software implementation was publicly released. The results give additional evidence of the generalizability of the Bayesian approach to complex rhythms from different music traditions. Finally, the downbeat detection task was formulated as a data compression problem. This resulted in a novel method that proved effective for a large part of the dataset and opens up some interesting threads for future research.
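    One way to make the pattern and micro-timing ideas concrete is to fold onset times into their phase within each metric cycle. This is only a hedged sketch: the fixed phases, two-second cycle, and jitter below are hypothetical, and the thesis builds its pattern map from spectral features rather than onset histograms:

    ```python
    import numpy as np

    def cycle_pattern(onsets, downbeats, bins=16):
        """Histogram of onset phases within each metric cycle, normalized to sum to 1."""
        pattern = np.zeros(bins)
        for start, end in zip(downbeats[:-1], downbeats[1:]):
            in_cycle = onsets[(onsets >= start) & (onsets < end)]
            phases = (in_cycle - start) / (end - start)
            pattern += np.histogram(phases, bins=bins, range=(0.0, 1.0))[0]
        return pattern / max(pattern.sum(), 1)

    # Toy performance: four 2-second cycles, strokes at fixed phases with small
    # micro-timing deviations (the structural "swing" discussed above)
    rng = np.random.default_rng(0)
    downbeats = np.arange(0.0, 8.1, 2.0)
    base = np.array([0.0, 0.25, 0.5, 0.75, 0.875])  # hypothetical stroke phases
    onsets = np.concatenate([
        d + 2.0 * np.clip(base + rng.normal(0, 0.005, base.size), 0, 0.999)
        for d in downbeats[:-1]
    ])

    pattern = cycle_pattern(np.sort(onsets), downbeats)
    print(round(pattern.sum(), 6))  # 1.0
    ```

    Averaging such per-cycle profiles over a performance reveals both the recurring pattern (which bins are occupied) and systematic micro-timing deviations (how the mass within each bin is displaced).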

    Real-time online musical collaboration system for Indian percussion

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007. Includes bibliographical references (p. 111-119). Thanks to the Internet, musicians located in different countries can now aspire to play with each other almost as if they were in the same room. However, the time delays due to the inherent latency of computer networks (up to several hundred milliseconds over long distances) are unsuitable for musical applications. Some musical collaboration systems address this issue by transmitting compressed audio streams (such as MP3) over low-latency, high-bandwidth networks (e.g. LANs or Internet2) to constrain time delays and optimize musician synchronization. Other systems, on the contrary, increase time delays to a musically relevant value, such as one phrase or one chord-progression cycle, and play it in a loop, thereby constraining the music being performed. In this thesis I propose TablaNet, a real-time online musical collaboration system for the tabla, a pair of North Indian hand drums. The system is based on a novel approach that combines machine listening and machine learning. Trained for a particular instrument, here the tabla, the system recognizes individual drum strokes played by the musician and sends them as symbols over the network. A computer at the receiving end identifies the musical structure from the incoming sequence of symbols by mapping them dynamically to known musical constructs. To deal with transmission delays, the receiver predicts the next events by analyzing previous patterns before the original events arrive, and synthesizes an audio output estimate with the appropriate timing. Although prediction approximations may result in a slightly different musical experience at the two ends, we find that the system demonstrates a fair level of playability for tabla players of various levels and functions well as an educational tool. by Mihir Sarkar. S.M.
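    The receiver's prediction step can be caricatured with a simple bigram model over stroke symbols. TablaNet itself maps incoming symbols to known musical constructs, so this dictionary-based predictor and the bol sequence below are merely illustrative assumptions:

    ```python
    from collections import Counter, defaultdict

    class StrokePredictor:
        """Toy sketch: predict the next tabla stroke symbol from the last one,
        so the receiver can synthesize ahead of delayed network events."""

        def __init__(self):
            self.bigrams = defaultdict(Counter)  # prev stroke -> counts of next strokes
            self.prev = None

        def observe(self, stroke):
            """Update transition counts as symbols arrive over the network."""
            if self.prev is not None:
                self.bigrams[self.prev][stroke] += 1
            self.prev = stroke

        def predict(self):
            """Most likely next stroke given the last observed one (None if unseen)."""
            if self.prev not in self.bigrams:
                return None
            return self.bigrams[self.prev].most_common(1)[0][0]

    # Train on a cyclic, teentaal-like bol sequence (the symbols sent over the network)
    pred = StrokePredictor()
    for bol in ["dha", "dhin", "dhin", "dha"] * 4:
        pred.observe(bol)
    print(pred.predict())  # dhin
    ```

    A real system would condition on longer contexts and on the position within the rhythmic cycle, but the principle is the same: keep playing the statistically expected stroke until the true event arrives.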