
    Antescofo Intermediate Representation

    We describe an intermediate language designed as a medium-level internal representation of programs for the interactive music system Antescofo. This representation is independent both of the Antescofo source language and of the architecture of the execution platform. It is used in tasks such as verification of timings, model-based conformance testing, static control-flow analysis, and simulation. The language is essentially a flat representation of Antescofo code, as a finite state machine extended with local and global variables, with delays, and with the creation of concurrent threads. It features a small number of simple instructions, which are either blocking (waiting for an external event, a signal, or a duration) or non-blocking (variable assignment, message emission, and control).
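    The machine model described above — a flat sequence of locations, each running non-blocking actions and then suspending on a blocking wait — can be illustrated with a small interpreter. The following Python sketch is purely illustrative: the instruction names (`Assign`, `Emit`, `Wait`) and their semantics are simplified assumptions, not Antescofo's actual IR.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Assign:          # non-blocking: update a variable
        var: str
        value: int

    @dataclass
    class Emit:            # non-blocking: send a message to the environment
        message: str

    @dataclass
    class Wait:            # blocking: suspend until a given event arrives
        event: str

    @dataclass
    class Location:        # one state of the flat machine
        actions: list      # non-blocking instructions, run atomically
        wait: Wait         # the blocking instruction ending the location
        next: object       # index of the successor location, or None

    def run(machine, events, env=None):
        """Interpret the flat machine against a scripted event sequence."""
        env = dict(env or {})
        out, pc, events = [], 0, iter(events)
        while pc is not None and pc < len(machine):
            loc = machine[pc]
            for a in loc.actions:
                if isinstance(a, Assign):
                    env[a.var] = a.value
                elif isinstance(a, Emit):
                    out.append(a.message)
            # block until the awaited event arrives (here: scripted input)
            for ev in events:
                if ev == loc.wait.event:
                    break
            else:
                break                  # input exhausted while blocked
            pc = loc.next
        return out, env

    machine = [
        Location([Assign("x", 1), Emit("start")], Wait("e1"), 1),
        Location([Emit("cue")], Wait("e2"), None),
    ]
    msgs, state = run(machine, ["e1", "e2"])
    # msgs == ["start", "cue"], state == {"x": 1}
    ```

    The two instruction classes mirror the blocking/non-blocking split of the abstract: only `Wait` can suspend the machine; everything between two waits executes atomically.
    
    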

    Transcription, Adaptation and Maintenance in Live Electronic Performance with Acoustic Instruments

    This paper examines processes of musical adaptation in a live electronic context, taking as a case study the authors’ collaborative work transcribing Richard Dudas’ Prelude No. 1 for flute and computer (2005) into a new version for clarinet and live electronics, performed in the spring of 2014 by clarinettist Pete Furniss. As such, the idea of transcription and its implications are central to this study. We additionally address some of the salient information that the user interface of an interactive electro-instrumental piece should present to the performer, as well as possible ways of restructuring not only the interface itself but also the unfolding of the piece, so as to aid the solo performer to the maximum degree possible. A secondary focus of the paper is to underline the need for a body of musical works that are technically straightforward enough to serve as an introduction to live electronic performance for musicians who might otherwise be daunted by the demands of the existing repertoire.

    Multimedia scheduling for interactive multimedia systems

    Scheduling for real-time interactive multimedia systems (IMS) raises specific challenges that require particular attention. Examples are the triggering and coordination of heterogeneous tasks, especially for IMS that use both physical time and a musical time that depends on a particular performance, and the way tasks that deal with audio processing interact with control tasks. Moreover, IMS have to follow a timed scenario, for instance one specified in an augmented musical score, and current IMS do not address reliability and predictability. We present how to formally interleave audio processing with control by using buffer types that represent audio buffers and the way computations on them may be interrupted, and how to check the time-safety property of IMS timed scenarios, in particular augmented scores for the IMS Antescofo for automatic accompaniment, developed at Ircam. Our approach is based on an extension of an intermediate representation similar to the E code of the real-time embedded programming language Giotto, and on static analysis procedures run on the graph of the intermediate representation.
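    One way to picture a static time-safety check over such a graph is a worst-case path analysis: given the worst-case execution time of each node and the delay budget before the next deadline, every path from a start node must fit within the budget. The sketch below is an illustration of this general idea, not the actual Antescofo analysis; the graph, WCET values, and budget are made up.

    ```python
    def time_safe(graph, wcet, budget, start):
        """Return True if no path from `start` exceeds `budget` time units.
        `graph` maps each node to its list of successors (assumed a DAG);
        `wcet` maps each node to its worst-case execution time."""
        def worst(node):
            succs = graph.get(node, [])
            return wcet[node] + (max(map(worst, succs)) if succs else 0)
        return worst(start) <= budget

    # toy task graph: a -> {b, c} -> d, with per-node WCETs
    graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    wcet  = {"a": 1, "b": 2, "c": 4, "d": 1}

    time_safe(graph, wcet, budget=6, start="a")   # worst path a-c-d = 6 -> True
    time_safe(graph, wcet, budget=5, start="a")   # -> False
    ```

    A real analysis would also have to account for the interruption points introduced by the buffer types, which is exactly what makes the interleaving of audio and control tractable in the approach above.
    
    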

    Test Methods for Score-Based Interactive Music Systems

    Score-Based Interactive Music Systems (SBIMS) are involved in live performances with human musicians, reacting in real time to audio signals and to asynchronous incoming events according to a pre-specified timed scenario called a mixed score. This implies strong requirements of reliability and robustness to unforeseen errors in input. In this paper, we present the application of formal methods for black-box conformance testing of embedded systems to SBIMSs. We describe how we have handled the three main problems in automatically testing reactive, real-time software such as SBIMSs: (i) the generation of relevant input data for testing, including delay values, with the aim of exhaustiveness; (ii) the computation of the corresponding expected output, according to a given mixed score; (iii) the execution of the tests on that input and the production of a verdict. The results obtained with this formal test method have made it possible to identify bugs in the SBIMS Antescofo.
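    The input-generation problem — producing test traces that include delay values — can be sketched as enumerating, for each scored event, a small set of timing deviations (early, on time, late, missed) and taking their product. This is only an illustration of the idea under made-up event names and delays, not the generation procedure of the paper.

    ```python
    import itertools

    def generate_inputs(score, deviations=(-0.1, 0.0, 0.1, None)):
        """Yield test traces: lists of (event, actual_delay) pairs.
        A deviation of None models a missed event."""
        per_event = [
            [(ev, None) if d is None else (ev, delay + d) for d in deviations]
            for ev, delay in score
        ]
        for trace in itertools.product(*per_event):
            yield list(trace)

    score = [("e1", 0.5), ("e2", 1.0)]
    traces = list(generate_inputs(score))
    # 4 deviations per event -> 4**2 = 16 candidate traces,
    # from both-events-early up to both-events-missed
    ```

    Real mixed scores branch on these timings (e.g. a missed event may trigger error-handling actions), which is why covering the deviation combinations matters for exhaustiveness.
    
    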

    Un framework automatique de test pour système interactif musical

    Score-Based Interactive Music Systems are involved in live performances with human musicians, reacting in real time to audio signals and asynchronous incoming events according to a pre-specified timed scenario called a mixed score. Building such a system is a difficult challenge, implying strong requirements of reliability and robustness to unforeseen errors in input. We present the application to an automatic accompaniment system of formal methods for conformance testing of critical embedded systems. Our approach is fully automatic and based on formal models constructed directly from mixed scores, which specify the behavior expected from the system when playing with musicians. It has been applied to real mixed scores, and the results obtained have made it possible to identify bugs in the tested system.

    Model-Based Testing for Building Reliable Realtime Interactive Music Systems

    The role of an Interactive Music System (IMS) is to accompany musicians during live performances, acting like a real musician. It must react in real time to audio signals from the musicians, according to a timed high-level requirement called a mixed score, written in a domain-specific language. Such goals imply strong requirements of temporal reliability and robustness to unforeseen errors in input, as yet little addressed by the computer music community. We present the application of Model-Based Testing techniques and tools to a state-of-the-art IMS, including in particular: offline and on-the-fly approaches for the generation of relevant test input data (including timing values) with coverage criteria; the computation of the corresponding expected output, according to the semantics of a given mixed score; and the black-box execution of the test data on the System Under Test with the production of a verdict. Our method is based on formal models in a dedicated intermediate representation, compiled directly from mixed scores (the high-level requirements), and either passed to the model checker Uppaal (after conversion to Timed Automata) in the offline approach, or executed by a virtual machine in the online approach. Our fully automatic framework has been applied to real mixed scores used in concerts, and the results obtained have made it possible to identify bugs in the target IMS.
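    The oracle-plus-verdict loop described above can be sketched in a few lines: an oracle derived from the score computes the expected output for a test trace, the system under test is executed black-box on the same trace, and comparison yields the verdict. All names below are illustrative stand-ins, not the framework's API.

    ```python
    def expected_output(score, trace):
        """Oracle: for each input event present in the score, the system
        should emit the score's associated action, in order."""
        actions = dict(score)
        return [actions[ev] for ev in trace if ev in actions]

    def run_test(sut, score, trace):
        """Execute the system under test on a trace and produce a verdict."""
        got = sut(trace)
        want = expected_output(score, trace)
        return "pass" if got == want else f"fail: expected {want}, got {got}"

    score = [("e1", "cue1"), ("e2", "cue2")]
    # two stand-in systems under test: one correct, one that drops a cue
    correct_sut = lambda trace: [dict(score)[ev] for ev in trace if ev in dict(score)]
    buggy_sut   = lambda trace: ["cue1"]

    run_test(correct_sut, score, ["e1", "e2"])   # "pass"
    run_test(buggy_sut,   score, ["e1", "e2"])   # "fail: expected [...], got [...]"
    ```

    In the offline approach the oracle role is played by the Uppaal model; in the online approach, by the virtual machine executing the intermediate representation.
    
    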

    Music Information Retrieval Meets Music Education

    This paper addresses the use of Music Information Retrieval (MIR) techniques in music education and their integration into learning software. A general overview of systems that are either commercially available or at the research stage is presented. Furthermore, three well-known MIR methods used in music learning systems are described, together with their state of the art: music transcription, solo and accompaniment track creation, and generation of performance instructions. As a representative example of a music learning system developed within the MIR community, the Songs2See software is outlined. Finally, challenges and directions for future research are described.

    The expressive function in wor songs

    We study some musical and expressive features of traditional Wor vocal music, an ancestral genre of the Biak people of Indonesia, created to celebrate important moments of their history and everyday life. A core aspect of Wor songs is the expression of wonder, which the Biaks have developed into an Aesthetics of Surprise [1, 2]. We describe some key structural features in the pitch and time domains used as means to express this aesthetics. We represent the acoustic and prosodic features encoding expressive content by means of an Expressive Function, which contains expressive indices with internal structure [3, 4]. We propose an augmented expressive score [5] for the transcription of unaccompanied Wor songs.

    Linking Sheet Music and Audio - Challenges and New Approaches

    Scores and audio files are the two most important ways to represent, convey, record, store, and experience music. While a score describes a piece of music on an abstract level using symbols such as notes, keys, and measures, an audio file allows for reproducing a specific acoustic realization of the piece. Each of these representations reflects different facets of the music, yielding insights into aspects ranging from structural elements (e.g., motives, themes, musical form) to specific performance aspects (e.g., artistic shaping, sound). Simultaneous access to score and audio representations is therefore of great importance. In this paper, we address the problem of automatically generating musically relevant linking structures between the various data sources available for a given piece of music. In particular, we discuss the task of sheet music-audio synchronization, with the aim of linking regions in images of scanned scores to musically corresponding sections in an audio recording of the same piece. Such linking structures form the basis for novel interfaces that allow users to access and explore multimodal sources of music within a single framework. As our main contributions, we give an overview of the state of the art for this kind of synchronization task, present some novel approaches, and indicate future research directions. In particular, we address problems that arise in the presence of structural differences and discuss challenges in applying optical music recognition to complex orchestral scores. Finally, potential applications of the synchronization results are presented.
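    The alignment core behind such linking is commonly dynamic time warping (DTW) over per-frame feature sequences (e.g. chroma vectors) computed from both the score rendition and the audio. The following pure-Python sketch uses toy one-dimensional "features" for readability; it is an illustration of the general technique, not the papers' specific method.

    ```python
    def dtw_path(a, b, dist):
        """Align sequences a and b; return the optimal warping path as a
        list of (index_in_a, index_in_b) pairs."""
        n, m = len(a), len(b)
        INF = float("inf")
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                c = dist(a[i - 1], b[j - 1])
                D[i][j] = c + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
        # backtrack the optimal warping path from the end
        path, i, j = [], n, m
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                          (D[i - 1][j], i - 1, j),
                          (D[i][j - 1], i, j - 1))
        return list(reversed(path))

    # score frames vs. a time-stretched audio rendition of the same notes
    score_feat = [0, 2, 4, 5]
    audio_feat = [0, 0, 2, 4, 4, 5]
    path = dtw_path(score_feat, audio_feat, lambda x, y: abs(x - y))
    # path links every score frame to its matching audio frame(s),
    # absorbing the repeated (stretched) audio frames
    ```

    In the sheet-music setting, the score-side sequence would itself be produced by optical music recognition, which is exactly where the structural-difference and OMR challenges discussed above enter.
    
    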