
    Spectral analysis for nonstationary audio

    A new approach for the analysis of nonstationary signals is proposed, with a focus on audio applications. Following earlier contributions, nonstationarity is modeled via stationarity-breaking operators acting on Gaussian stationary random signals. The focus is on time warping and amplitude modulation, and an approximate maximum-likelihood approach based on suitable approximations in the wavelet transform domain is developed. This paper provides a theoretical analysis of the approximations and introduces JEFAS, a corresponding estimation algorithm. The latter is tested and validated on synthetic as well as real audio signals. Comment: IEEE/ACM Transactions on Audio, Speech and Language Processing, Institute of Electrical and Electronics Engineers, in press.
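    The stationarity-breaking model described above (amplitude modulation and time warping applied to a Gaussian stationary signal) can be sketched as follows; function names and parameter values are illustrative, not taken from JEFAS:

```python
import numpy as np

def amplitude_modulate(x, fs, mod_hz=0.5, depth=0.5):
    """Break stationarity with a slowly varying amplitude envelope a(t) > 0."""
    t = np.arange(len(x)) / fs
    a = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return a * x

def time_warp(x, gamma):
    """Break stationarity by resampling x at warped times gamma(t);
    linear interpolation stands in for a proper warping operator."""
    n = len(x)
    return np.interp(np.clip(gamma, 0, n - 1), np.arange(n), x)

rng = np.random.default_rng(0)
x = rng.standard_normal(8000)           # white Gaussian noise: stationary
y = amplitude_modulate(x, fs=8000.0)    # amplitude-modulated observation
t = np.arange(len(x))
z = time_warp(x, t + 50 * np.sin(2 * np.pi * t / len(x)))  # warped observation
```

    Estimating the envelope or the warping function from `y` or `z` alone is the inverse problem the paper addresses in the wavelet domain.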

    Performance Following: Tracking a Performance Without a Score

    EPSRC Doctoral Training Award; EPSRC Leadership Fellowship

    Parallel Online Time Warping for Real-Time Audio-to-Score Alignment in Multi-core Systems

    [EN] The Audio-to-Score framework consists of two separate stages: pre-processing and alignment. Alignment is commonly solved through offline Dynamic Time Warping (DTW), a method that finds the minimum-cost path over the distortion matrix to determine the relation between the performance and the musical score times. In this work we propose a parallel online DTW solution based on a client-server architecture. The current version of the application has been implemented for multi-core architectures (x86, x64 and ARM), thus covering both powerful systems and mobile devices. Extensive experimentation has been conducted to validate the software. The experiments also show that our framework achieves a good score alignment within the real-time window by using parallel computing techniques.

    This work has been partially supported by the Spanish Ministry of Science and Innovation and FEDER under Projects TEC2012-38142-C04-01, TEC2012-38142-C04-03, TEC2012-38142-C04-04, TEC2015-67387-C4-1-R, TEC2015-67387-C4-3-R, TEC2015-67387-C4-4-R, the European Union FEDER (CAPAP-H5 network TIN2014-53522-REDT), and the Generalitat Valenciana under Grant PROMETEOII/2014/003.

    Alonso-Jordá, P.; Cortina, R.; Rodríguez-Serrano, F.; Vera-Candeas, P.; Alonso-González, M.; Ranilla, J. (2017). Parallel Online Time Warping for Real-Time Audio-to-Score Alignment in Multi-core Systems. The Journal of Supercomputing 73(1):126-138. https://doi.org/10.1007/s11227-016-1647-5
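    The offline DTW step that the paper parallelizes can be sketched as a minimal cost-accumulation and backtracking routine; this is a toy stand-in for the distortion-matrix path search, not the authors' implementation:

```python
import numpy as np

def dtw(cost):
    """Accumulate a cost matrix and backtrack the minimum-cost path.
    cost[i, j] is the frame-wise distance between performance frame i
    and score frame j."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],
                                                 acc[i, j - 1],
                                                 acc[i - 1, j - 1])
    # backtrack from the end to recover the alignment path
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        step = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return acc[n, m], path[::-1]
```

    The online variant extends this path incrementally as new performance frames arrive, which is what makes real-time score following possible.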

    Multimodal music information processing and retrieval: survey and future challenges

    Towards improving performance in various music information processing tasks, recent studies exploit different modalities able to capture diverse aspects of music. Such modalities include audio recordings, symbolic music scores, mid-level representations, motion and gestural data, video recordings, editorial or cultural tags, lyrics, and album cover art. This paper critically reviews the various approaches adopted in Music Information Processing and Retrieval and highlights how multimodal algorithms can help Music Computing applications. First, we categorize the related literature based on the application it addresses. Subsequently, we analyze existing information fusion approaches, and we conclude with the set of challenges that the Music Information Retrieval and Sound and Music Computing research communities should focus on in the coming years.

    A Human-Computer Duet System for Music Performance

    Virtual musicians have become a remarkable phenomenon in the contemporary multimedia arts. However, most virtual musicians nowadays have not been endowed with the ability to create their own behaviors or to perform music with human musicians. In this paper, we first create a virtual violinist who can collaborate with a human pianist to perform chamber music automatically without any intervention. The system incorporates techniques from various fields, including real-time music tracking, pose estimation, and body movement generation. In our system, the virtual musician's behavior is generated from the given music audio alone, resulting in a low-cost, efficient and scalable way to produce co-performances between human and virtual musicians. The proposed system has been validated in public concerts. Objective quality assessment approaches and possible ways to systematically improve the system are also discussed.

    Dance-the-music : an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    In this article, a computational platform is presented, entitled “Dance-the-Music”, that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teacher's models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps correctly. Moreover, recognition algorithms based on a template-matching method can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.
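    A frame-wise template comparison of the kind described above can be sketched as follows; the array layout (frames × joints × 3D coordinates) and the mean-distance score are assumptions for illustration, not the platform's actual metric:

```python
import numpy as np

def template_score(template, performance):
    """Mean Euclidean distance per joint between a teacher's step template
    and a time-aligned student performance; lower means a closer match."""
    if template.shape != performance.shape:
        raise ValueError("sequences must be time-aligned and equally shaped")
    # per-joint distance at every frame: shape (frames, joints)
    per_joint = np.linalg.norm(template - performance, axis=-1)
    return float(per_joint.mean())
```

    In practice the student sequence would first be time-aligned to the template (e.g. by resampling to the same number of frames) before scoring.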

    Automatic transcription of Turkish makam music

    In this paper we propose an automatic system for transcribing makam music of Turkey. We document the specific traits of this music that deviate from properties that were targeted by transcription tools so far, and we compile a dataset of makam recordings along with aligned microtonal ground-truth. An existing multi-pitch detection algorithm is adapted for transcribing music in 20 cent resolution, and the final transcription is centered around the tonic frequency of the recording. Evaluation metrics for transcribing microtonal music are utilized and results show that transcription of Turkish makam music in, e.g., an interactive transcription software is feasible using the current state-of-the-art.

    This work is partly supported by the European Research Council under the European Union’s Seventh Framework Program, as part of the CompMusic project (ERC grant agreement 267583).
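    Centering a transcription on the tonic at 20-cent resolution amounts to mapping each detected pitch to a quantized cent value above the tonic; a hypothetical helper (not the authors' code) might look like:

```python
import math

def cents_above_tonic(freq_hz, tonic_hz, resolution=20):
    """Pitch in cents relative to the tonic, quantized to a 20-cent grid."""
    cents = 1200.0 * math.log2(freq_hz / tonic_hz)
    return resolution * round(cents / resolution)
```

    A 20-cent grid is fine enough to separate the microtonal intervals of makam music, which the usual 100-cent semitone grid collapses.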