127 research outputs found

    Online score-informed source separation in polyphonic mixtures using instrument spectral patterns

    [EN] Soundprism is a real-time algorithm that separates polyphonic music audio into source signals, given the musical score of the audio in advance. This paper presents a framework for a Soundprism implementation. A study of the sound quality of the online score-informed source separation is presented, although a real-time implementation is not carried out. The system is composed of two stages: (1) a score follower that matches a MIDI score position to each time frame of the musical performance; and (2) a source separator based on a nonnegative matrix factorization approach guided by the score. Real audio mixtures of instrumental quartets were employed to obtain preliminary results for the proposed system.
    Funding: Ministerio de Economía y Competitividad, Grant Number TEC2015-67387-C4-{1, 2, 3}-R.
    Muñoz-Montoro, A.; Vera-Candeas, P.; Cortina, R.; Combarro, E. F.; Alonso-Jordá, P. (2019). Online score-informed source separation in polyphonic mixtures using instrument spectral patterns. Computational and Mathematical Methods, 1-10. https://doi.org/10.1002/cmm4.1040
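    The abstract names the two stages but not the exact factorization variant, so the following is a minimal sketch of score-informed NMF under common assumptions: fixed instrument spectral patterns W, a binary note-activity mask produced by the score follower, and multiplicative KL-divergence updates. The function names and the Wiener-style masking step are illustrative, not the authors' implementation.

```python
import numpy as np

def score_informed_nmf(V, W, score_mask, n_iter=100, eps=1e-9):
    """Estimate note activations H >= 0 so that V ~ W @ H, keeping H zero
    wherever the aligned score says a note cannot be sounding.

    V          : (F, T) magnitude spectrogram of the mixture
    W          : (F, K) fixed instrument spectral patterns, one column per note
    score_mask : (K, T) binary mask from the score follower (1 = note allowed)
    """
    K, T = W.shape[1], V.shape[1]
    H = np.random.rand(K, T) * score_mask  # zeros stay zero under multiplicative updates
    for _ in range(n_iter):
        WH = W @ H + eps
        # multiplicative update minimizing the KL divergence D(V || WH)
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
    return H

def separate_source(V_complex, W, H, note_idx, eps=1e-9):
    """Wiener-style soft mask for the source formed by the notes in note_idx,
    applied to the complex STFT (invert with an iSTFT afterwards)."""
    mask = (W[:, note_idx] @ H[note_idx, :]) / (W @ H + eps)
    return mask * V_complex
```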

    Single-channel source separation using non-negative matrix factorization


    Convolutional Recurrent Neural Networks for Polyphonic Sound Event Detection

    Sound events often occur in unstructured environments where they exhibit wide variations in their frequency content and temporal structure. Convolutional neural networks (CNNs) are able to extract higher-level features that are invariant to local spectral and temporal variations. Recurrent neural networks (RNNs) are powerful in learning the longer-term temporal context in audio signals. CNNs and RNNs as classifiers have recently shown improved performance over established methods in various sound recognition tasks. We combine these two approaches in a Convolutional Recurrent Neural Network (CRNN) and apply it to a polyphonic sound event detection task. We compare the performance of the proposed CRNN method with CNN, RNN, and other established methods, and observe a considerable improvement on four different datasets consisting of everyday sound events.
    Comment: Accepted for IEEE Transactions on Audio, Speech and Language Processing, Special Issue on Sound Scene and Event Analysis
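    As a rough illustration of the CNN-then-RNN-then-frame-wise-sigmoid pattern the abstract describes, here is a minimal PyTorch sketch; the layer counts, pooling scheme, GRU choice and class count are placeholder assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Minimal CRNN for polyphonic sound event detection: convolutional blocks
    extract features invariant to local spectral shifts, a bidirectional GRU
    models longer-term temporal context, and a per-frame sigmoid outputs
    multi-label event activities (several events may be active at once)."""

    def __init__(self, n_mels=40, n_classes=6, rnn_hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d((1, 2)),  # pool along frequency only, keep time resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
        )
        self.rnn = nn.GRU(32 * (n_mels // 4), rnn_hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * rnn_hidden, n_classes)

    def forward(self, x):              # x: (batch, time, n_mels) log-mel frames
        x = self.cnn(x.unsqueeze(1))   # -> (batch, 32, time, n_mels // 4)
        b, c, t, f = x.shape
        x, _ = self.rnn(x.permute(0, 2, 1, 3).reshape(b, t, c * f))
        return torch.sigmoid(self.head(x))  # (batch, time, n_classes) in [0, 1]

# e.g. CRNN()(torch.randn(8, 500, 40)) -> per-frame event probabilities
```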

    Onsets and Velocities: Affordable Real-Time Piano Transcription Using Convolutional Neural Networks

    Polyphonic piano transcription has recently experienced substantial progress, driven by the use of sophisticated deep learning approaches and the introduction of new subtasks such as note onset, offset, velocity and pedal detection. This progress has been coupled with increased complexity and size of the proposed models, which typically rely on non-realtime components and high-resolution data. In this work we focus on onset and velocity detection, showing that a substantially smaller and simpler convolutional approach, using lower temporal resolution (24 ms), is still competitive: our proposed ONSETS&VELOCITIES model achieves state-of-the-art performance on the MAESTRO dataset for onset detection (F1=96.78%) and sets a good novel baseline for onset+velocity (F1=94.50%), while having ~3.1M parameters and maintaining real-time capabilities on modest commodity hardware. We provide open-source code to reproduce our results and a real-time demo with a pretrained model.
    Comment: Accepted at EUSIPCO 2023
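    The abstract does not describe how frame-wise outputs become discrete notes; a common post-processing step for convolutional onset detectors, sketched below under that assumption, is thresholded peak picking over the per-pitch onset probabilities (here at the stated 24 ms hop). The threshold and minimum-gap values are illustrative.

```python
import numpy as np

def pick_onsets(onset_probs, threshold=0.5, min_gap=2):
    """Decode frame-wise, per-pitch onset probabilities into discrete onsets
    by keeping local maxima above a threshold, at least min_gap frames apart.

    onset_probs : (T, 88) array of onset probabilities, one row per 24 ms frame
    returns     : list of (frame_index, pitch_index) pairs
    """
    onsets = []
    for pitch in range(onset_probs.shape[1]):
        p = onset_probs[:, pitch]
        last = -min_gap
        for t in range(1, len(p) - 1):
            is_peak = p[t] >= threshold and p[t] >= p[t - 1] and p[t] >= p[t + 1]
            if is_peak and t - last >= min_gap:
                onsets.append((t, pitch))
                last = t
    return onsets
```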

    Audio source separation for music in low-latency and high-latency scenarios

    This thesis proposes specific methods to address the limitations of current music source separation methods in low-latency and high-latency scenarios. First, we focus on methods with low computational cost and low latency. We propose the use of Tikhonov regularization as a method for spectrum decomposition in the low-latency context. We compare it to existing techniques in pitch estimation and tracking tasks, crucial steps in many separation methods. We then use the proposed spectrum decomposition method in low-latency separation tasks targeting singing voice, bass and drums. Second, we propose several high-latency methods that improve the separation of singing voice by modeling components that are often not accounted for, such as breathiness and consonants. Finally, we explore using temporal correlations and human annotations to enhance the separation of drums and complex polyphonic music signals.
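    The attraction of Tikhonov regularization in the low-latency setting is that the decomposition of each spectral frame has a closed-form solution (a single linear solve), in contrast to iterative NMF-style updates. A minimal sketch, assuming a fixed spectral basis B and an illustrative non-negativity clip; the thesis's exact formulation may differ.

```python
import numpy as np

def tikhonov_decompose(x, B, lam=0.1):
    """Decompose one magnitude-spectrum frame x (F,) onto a basis B (F, K)
    by solving  min_g ||x - B g||^2 + lam * ||g||^2, whose closed form is
    g = (B^T B + lam * I)^{-1} B^T x  -- one linear solve per frame, which
    keeps both latency and computational cost low."""
    K = B.shape[1]
    g = np.linalg.solve(B.T @ B + lam * np.eye(K), B.T @ x)
    return np.maximum(g, 0.0)  # clip negatives: gains of spectral patterns
```

    Since B is fixed, B^T B + lam * I can be factorized once offline, so each incoming frame costs only a back-substitution, which is what makes the approach suitable for frame-by-frame, real-time operation.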

    Identifying Missing and Extra Notes in Piano Recordings Using Score-Informed Dictionary Learning


    A dynamic programming variant of non-negative matrix deconvolution for the transcription of struck string instruments
