
    A Coupled Duration-Focused Architecture for Real-Time Music-to-Score Alignment

    The capacity for real-time synchronization and coordination while performing a music score is a common ability among trained musicians, and it presents an interesting challenge for machine intelligence. Compared to speech recognition, which has influenced many music information retrieval systems, the temporal dynamics and complexity of music pose challenging problems for common approximations in the time modeling of data streams. In this paper, we propose a design for a real-time music-to-score alignment system. Given a live recording of a musician playing a music score, the system follows the musician in real time within the score and decodes the tempo (or pace) of the performance. The proposed design features two coupled audio and tempo agents within a unique probabilistic inference framework that adaptively updates its parameters based on the real-time context. Online decoding is achieved through the collaboration of the coupled agents in a hidden hybrid Markov/semi-Markov framework, where the prediction feedback of one agent affects the behavior of the other. We evaluate both the real-time alignment and the proposed temporal model. An implementation of the presented system has been widely used in real concert situations worldwide, and readers are encouraged to access the actual system and experiment with the results.
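
    The coupling described above can be made concrete with a toy loop: an audio agent scores each incoming frame against per-event templates, and a tempo agent re-estimates the tempo from the duration of each decoded event, which in turn rescales the expected duration of the next semi-Markov state. This is a minimal sketch of the feedback idea only, not the paper's model: the cosine-similarity observation model, the greedy boundary rule standing in for semi-Markov decoding, and all names (`follow`, `FRAME_RATE`) are my own assumptions.

```python
import numpy as np

FRAME_RATE = 100.0  # analysis frames per second (an assumption)

def follow(frames, templates, beats, tempo_bpm=120.0, alpha=0.9):
    """Toy coupled audio/tempo agents; yields (event index, tempo) per frame."""
    event, age = 0, 0  # current score event and frames spent inside it
    for frame in frames:
        # Audio agent: crude per-event likelihoods via cosine similarity.
        obs = [t @ frame / (np.linalg.norm(t) * np.linalg.norm(frame) + 1e-9)
               for t in templates]
        # Tempo feedback: the expected duration of the current event, in
        # frames, scales with the tempo agent's current estimate.
        expected = beats[event] * (60.0 / tempo_bpm) * FRAME_RATE
        # Greedy stand-in for semi-Markov decoding: leave the event once its
        # tempo-scaled duration has elapsed and the next event fits better.
        if event + 1 < len(templates) and age >= expected and obs[event + 1] > obs[event]:
            observed_s = max(age, 1) / FRAME_RATE
            inst = 60.0 * beats[event] / observed_s            # instantaneous tempo
            tempo_bpm = alpha * tempo_bpm + (1 - alpha) * inst  # smoothed update
            event, age = event + 1, 0
        age += 1
        yield event, tempo_bpm
```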

    An efficient musical accompaniment parallel system for mobile devices

    This work presents a software system that tracks the reproduction of a musical piece with the aim of matching the performance position to its symbolic representation on a digital score. Within this kind of system, known as an automatic musical accompaniment system, the score-alignment process can be carried out in real time. Real-time score alignment, also known as score following, poses an important challenge due to the large amount of computation needed to process each digital frame and the very small time slot available to process it. The challenge is even greater on handheld devices, i.e. devices characterized by both low power consumption and mobility. The results presented here show that it is possible to efficiently exploit several cores of an ARM processor, or a GPU accelerator (present in some NVIDIA SoCs), reducing the processing time per frame to under 10 ms in most cases. This work was supported by the Ministry of Economy and Competitiveness of Spain (FEDER) under projects TEC2015-67387-C4-1-R, TEC2015-67387-C4-2-R and TEC2015-67387-C4-3-R, the Andalusian Business, Science and Innovation Council under project P2010-TIC-6762 (FEDER), and the Generalitat Valenciana under project PROMETEOII/2014/003.
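
    The sub-10 ms frame budget reported above is essentially a data-parallel problem: each frame's spectrum must be matched against a large dictionary of note templates before the next frame arrives. The sketch below illustrates that structure only; it is not the authors' ARM/GPU code, and the thread count, FFT size, and Euclidean distance measure are assumptions of mine.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import numpy as np

N_FFT, N_WORKERS = 4096, 4  # assumed frame size and core count
pool = ThreadPoolExecutor(max_workers=N_WORKERS)

def score_chunk(chunk, spectrum):
    # Distance of each template in the chunk to the frame spectrum; NumPy
    # releases the GIL here, so the worker threads genuinely overlap.
    return np.linalg.norm(chunk - spectrum, axis=1)

def process_frame(samples, template_chunks):
    """Score one audio frame against all templates; return (best, latency ms)."""
    start = time.perf_counter()
    spectrum = np.abs(np.fft.rfft(samples, n=N_FFT))
    parts = pool.map(score_chunk, template_chunks,
                     [spectrum] * len(template_chunks))
    distances = np.concatenate(list(parts))
    elapsed_ms = (time.perf_counter() - start) * 1e3  # check the 10 ms deadline
    return int(np.argmin(distances)), elapsed_ms
```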

    Performance Following: Real-Time Prediction of Musical Sequences Without a Score


    Real-Time Audio-to-Score Alignment of Music Performances Containing Errors and Arbitrary Repeats and Skips

    This paper discusses real-time alignment of audio signals of a music performance to the corresponding score (a.k.a. score following) that can handle tempo changes, errors, and arbitrary repeats and/or skips (repeats/skips) in performances. This type of score following is particularly useful in automatic accompaniment for practices and rehearsals, where errors and repeats/skips are often made. Simple extensions of the algorithms previously proposed in the literature are not applicable in these situations for scores of practical length due to their large computational complexity. To cope with this problem, we present two hidden Markov models of monophonic performance with errors and arbitrary repeats/skips, and derive efficient score-following algorithms under the assumption that the prior probability distributions of score positions before and after repeats/skips are independent of each other. We confirmed real-time operation of the algorithms with music scores of practical length (around 10,000 notes) on a modern laptop, and their ability to recover tracking of the input performance within 0.7 s on average after repeats/skips in clarinet performance data. Further improvements and an extension to polyphonic signals are also discussed. Comment: 12 pages, 8 figures, version accepted in IEEE/ACM Transactions on Audio, Speech, and Language Processing.
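
    The independence assumption is what makes this tractable: if the landing position of a repeat/skip does not depend on where the jump started, the O(N^2) jump transitions of a naive HMM collapse into a single scalar redistributed by a destination prior, so each frame costs only O(N). The forward-step sketch below illustrates that factorization; the transition probabilities and the uniform destination prior are illustrative values of mine, not the paper's.

```python
import numpy as np

def forward_step(alpha, obs, p_stay=0.6, p_next=0.395, p_jump=0.005,
                 dest_prior=None):
    """One O(N) forward update over N score positions.

    alpha: current position posterior; obs: per-position frame likelihoods.
    """
    n = alpha.size
    if dest_prior is None:
        dest_prior = np.full(n, 1.0 / n)      # uniform prior over landing points
    new = p_stay * alpha                       # stay on the same score position
    new[1:] += p_next * alpha[:-1]             # advance to the next note
    new += p_jump * alpha.sum() * dest_prior   # one scalar covers ALL repeats/skips
    new *= obs                                 # weight by the frame likelihood
    return new / new.sum()
```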

    On the Creative Use of Score Following and Its Impact on Research

    (Abstract to follow)

    Convolutional Recurrent Neural Networks for Polyphonic Sound Event Detection

    Sound events often occur in unstructured environments, where they exhibit wide variations in their frequency content and temporal structure. Convolutional neural networks (CNNs) are able to extract higher-level features that are invariant to local spectral and temporal variations. Recurrent neural networks (RNNs) are powerful in learning the longer-term temporal context in audio signals. CNNs and RNNs as classifiers have recently shown improved performance over established methods in various sound recognition tasks. We combine these two approaches in a convolutional recurrent neural network (CRNN) and apply it to a polyphonic sound event detection task. We compare the performance of the proposed CRNN method with CNN, RNN, and other established methods, and observe a considerable improvement on four different datasets consisting of everyday sound events. Comment: Accepted for IEEE Transactions on Audio, Speech and Language Processing, Special Issue on Sound Scene and Event Analysis.
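
    A minimal PyTorch sketch of such a CRNN is given below for concreteness. The layer sizes and class count are assumptions rather than the paper's configuration; the structural points it illustrates are that pooling is applied only along frequency (preserving time resolution for the recurrent layer) and that a per-frame sigmoid head yields the multi-label outputs polyphonic detection requires.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_mels=40, n_classes=6, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((1, 5)),            # pool frequency only, keep time
            nn.Conv2d(32, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((1, 4)),
        )
        feat = 32 * (n_mels // 5 // 4)       # channels x remaining mel bins
        self.rnn = nn.GRU(feat, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                    # x: (batch, time, mels)
        z = self.cnn(x.unsqueeze(1))         # (batch, ch, time, mels')
        z = z.permute(0, 2, 1, 3).flatten(2) # (batch, time, ch * mels')
        z, _ = self.rnn(z)                   # longer-term temporal context
        return torch.sigmoid(self.head(z))   # per-frame multi-label activities
```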

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:

    - Objective: What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint. For what destination and for what use? To be performed by a human(s) (in the case of a musical score), or by a machine (in the case of an audio file).
    - Representation: What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat. What format is to be used? Examples are: MIDI, piano roll or text. How will the representation be encoded? Examples are: scalar, one-hot or many-hot.
    - Architecture: What type(s) of deep neural network is (are) to be used? Examples are: feedforward network, recurrent network, autoencoder or generative adversarial networks.
    - Challenge: What are the limitations and open challenges? Examples are: variability, interactivity and creativity.
    - Strategy: How do we model and control the process of generation? Examples are: single-step feedforward, iterative feedforward, sampling or input manipulation.

    For each dimension, we conduct a comparative analysis of various models and techniques, and we propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and prospects. Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 2019.
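
    As a small concrete example of the encoding dimension (the note range and helper names are my own, not from the survey), a one-hot vector encodes a single melody note per time step, while a many-hot vector encodes a polyphonic piano-roll step:

```python
import numpy as np

N_PITCHES = 128  # MIDI pitch range (an assumption)

def one_hot(pitch):
    v = np.zeros(N_PITCHES)
    v[pitch] = 1.0
    return v        # exactly one active entry: a single melody note

def many_hot(pitches):
    v = np.zeros(N_PITCHES)
    v[list(pitches)] = 1.0
    return v        # several active entries: a chord in a piano-roll step

melody_step = one_hot(60)            # middle C
chord_step = many_hot({60, 64, 67})  # C major triad
```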