
    Embedding native audio-processing in a score following system with quasi sample accuracy

    This paper reports on the experimental native embedding of audio processing into the Antescofo system, to leverage timing precision at both the program and the system level, to accommodate time-driven (audio processing) and event-driven (control) computations, and to preserve system behaviour across hardware platforms. Here, native embedding means that audio computations can be specified in dedicated DSLs (e.g., Faust), compiled on the fly and driven by the Antescofo scheduler. We showcase the results through an interactive piece by composer Pierre Boulez, Anthèmes 2 for violin and live electronics.
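    As a rough illustration of what "quasi sample accuracy" can mean in practice, here is a minimal Python sketch under hypothetical names (not Antescofo's or Faust's actual API): a scheduler splits each audio buffer at the sample offsets of pending control events, so parameter changes take effect inside the block rather than only at block boundaries.

```python
import numpy as np

class GainDSP:
    """Stand-in for an on-the-fly compiled DSP unit (e.g. a Faust effect)."""
    def __init__(self, gain=1.0):
        self.gain = gain
    def set_param(self, name, value):
        if name == "gain":
            self.gain = value
    def compute(self, block):
        return block * self.gain

def render_block(dsp, block, events):
    """Process one audio block, splitting it at pending control events so
    that parameter changes land on their exact sample offset."""
    out = np.empty_like(block)
    cursor = 0
    for offset, name, value in sorted(events):  # events: (sample_offset, param, value)
        if cursor < offset:
            out[cursor:offset] = dsp.compute(block[cursor:offset])
            cursor = offset
        dsp.set_param(name, value)              # applied with sample accuracy
    out[cursor:] = dsp.compute(block[cursor:])
    return out

# Example: the gain drops to 0.5 exactly 100 samples into a 256-sample block.
out = render_block(GainDSP(), np.ones(256, dtype=np.float32), [(100, "gain", 0.5)])
print(out[99], out[100])  # 1.0 0.5
```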

    Multimedia scheduling for interactive multimedia systems

    Scheduling for real-time interactive multimedia systems (IMS) raises specific challenges that require particular attention: triggering and coordinating heterogeneous tasks, especially for IMS that use both a physical time and a musical time that depends on a particular performance, and making audio-processing tasks interact with control tasks. Moreover, IMS must realise a timed scenario, for instance one specified in an augmented musical score, yet current IMS do not address their reliability and predictability. We present how to formally interleave audio processing with control by using buffer types, which represent audio buffers and the way computations on them may be interrupted, and how to check the time-safety of IMS timed scenarios, in particular augmented scores for the Antescofo IMS for automatic accompaniment developed at Ircam. Our approach is based on an extension of an intermediate representation similar to the E code of the real-time embedded programming language Giotto, and on static analysis procedures run on the graph of this intermediate representation.
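    The kind of static check involved can be pictured with a much-simplified Python sketch, assuming each node of an E-code-like intermediate representation carries a worst-case execution time and, optionally, a deadline such as the end of the current audio buffer period. The node names, numbers and graph below are illustrative only, not the paper's actual intermediate representation.

```python
# Hypothetical E-code-like intermediate representation: each node has a
# worst-case execution time (ms) and, optionally, a deadline (ms) relative
# to the start of the scenario.
IR = {
    "e0":    {"wcet": 0.2, "deadline": None, "next": ["dsp1"]},
    "dsp1":  {"wcet": 1.1, "deadline": None, "next": ["ctrl1"]},
    "ctrl1": {"wcet": 0.3, "deadline": 1.45, "next": ["dsp2"]},  # one 64-sample buffer at 44.1 kHz
    "dsp2":  {"wcet": 1.2, "deadline": 2.90, "next": []},
}

def time_safe(ir, entry):
    """Walk every path from `entry`, accumulating WCET and checking deadlines."""
    violations = []
    def walk(node, elapsed):
        elapsed += ir[node]["wcet"]
        deadline = ir[node]["deadline"]
        if deadline is not None and elapsed > deadline:
            violations.append(f"{node}: {elapsed:.2f} ms exceeds deadline {deadline} ms")
        for nxt in ir[node]["next"]:
            walk(nxt, elapsed)
    walk(entry, 0.0)
    return violations

print(time_safe(IR, "e0") or "time-safe")  # here ctrl1 overruns its buffer deadline
```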

    Operational semantics of a domain specific language for real time musician-computer interaction

    With the advent and availability of powerful personal computing, computer music research and industry have focused on real-time musical interaction between musicians and computers, delegating human-like actions to computers that interact with a musical environment. One common use case is automatic accompaniment, where the system comprises a real-time machine-listening component that, in reaction to the recognition of events in a score performed by a human musician, launches the actions necessary for the accompaniment section. While the real-time detection of score events in a live performance has been widely addressed in the literature, score accompaniment (the reactive part of the process) has rarely been discussed. This paper addresses that missing component from a formal-language perspective. We show how language considerations enable better authoring of time and interaction during programming/composing, and how they address critical aspects of a musical performance (such as errors) in real time. We sketch the real-time features required by automatic musical accompaniment seen as a reactive system. We formalize the timing strategies for musical events, taking into account the various temporal scales used in music. Various strategies for handling synchronization constraints and errors are presented. We give a formal semantics that models the possible behaviors of the system in terms of Parametric Timed Automata.
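    To make two of these ingredients concrete, the following toy Python sketch (illustrative only, not the paper's formal semantics or Antescofo's actual behaviour) converts a delay expressed in beats to physical time through the current tempo estimate, and shows two hypothetical policies for the actions attached to a score event that the listening machine never detected.

```python
def beats_to_seconds(delay_beats, tempo_bpm):
    """Musical time -> physical time under the current tempo estimate."""
    return delay_beats * 60.0 / tempo_bpm

def handle_missed_event(actions, policy):
    """Two hypothetical error-handling policies for a missed score event."""
    if policy == "skip":       # drop the actions attached to the missed event
        return []
    if policy == "catch_up":   # fire them immediately, with zero delay
        return [(0.0, name) for _, name in actions]
    raise ValueError(policy)

# Actions attached to one score event: (delay in beats, action name).
actions = [(0.0, "start_harmonizer"), (0.5, "open_reverb")]
print([(beats_to_seconds(d, tempo_bpm=90), name) for d, name in actions])
print(handle_missed_event(actions, "catch_up"))
```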

    The arpeggigon: declarative programming of a full-fledged musical application

    There are many systems and languages for music that are essentially declarative, often following the synchronous dataflow paradigm. As these tools are mainly aimed at artists, however, their application focus tends to be narrow and their usefulness as general-purpose tools for developing musical applications limited, at least if one desires to stay declarative. This paper demonstrates that Functional Reactive Programming (FRP) in combination with Reactive Values and Relations (RVR) is one way of addressing this gap. The former, in the synchronous dataflow tradition, aligns with the temporal and declarative nature of music, while the latter allows declarative interfacing with external components as needed for full-fledged musical applications. The paper is a case study around the development of an interactive cellular automaton for composing groove-based music. It illustrates the interplay between FRP and RVR as well as programming techniques and examples generally useful for musical, and other time-aware, interactive applications.
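    The reactive-values idea can be pictured with a small Python analogue (a conceptual sketch only; the actual RVR library is a Haskell API with typed and possibly bidirectional relations): a reactive value notifies subscribers when it changes, and a relation keeps one value in sync with a function of another, which is how a model can be wired declaratively to GUI widgets.

```python
class ReactiveValue:
    """A mutable value that notifies its subscribers when it changes."""
    def __init__(self, value):
        self._value, self._subs = value, []
    def get(self):
        return self._value
    def set(self, value):
        if value != self._value:
            self._value = value
            for fn in self._subs:
                fn(value)
    def on_change(self, fn):
        self._subs.append(fn)

def relate(src, dst, f=lambda x: x):
    """Directed relation: keep `dst` equal to f(src)."""
    src.on_change(lambda v: dst.set(f(v)))
    dst.set(f(src.get()))

# e.g. keeping a model tempo (BPM) in sync with a normalized GUI slider
slider = ReactiveValue(0.5)
tempo = ReactiveValue(120)
relate(slider, tempo, lambda x: 60 + x * 120)
slider.set(0.75)
print(tempo.get())  # 150.0
```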

    Funky grooves: declarative programming of full-fledged musical applications

    There are many systems and languages for music that are essentially declarative, often following the synchronous dataflow paradigm. As these tools are mainly aimed at artists, however, their application focus tends to be narrow and their usefulness as general-purpose tools for developing musical applications limited, at least if one desires to stay declarative. This paper demonstrates that Functional Reactive Programming (FRP) in combination with Reactive Values and Relations (RVR) is one way of addressing this gap. The former, in the synchronous dataflow tradition, aligns with the temporal and declarative nature of music, while the latter allows declarative interfacing with external components as needed for full-fledged musical applications. The paper is a case study around the development of an interactive cellular automaton for composing groove-based music.

    Un langage de programmation pour composer l'interaction musicale : la gestion du temps et des événements dans Antescofo

    Mixed music is the association, in live performance, of human musicians and computer media interacting in real time. Authoring the interaction between the humans and the electronic processes, as well as its real-time implementation, challenges computer science in several ways. This contribution presents the Antescofo real-time system and its domain-specific language. Using this language, a composer can describe temporal scenarios in which electronic musical processes are computed and scheduled in interaction with a live musician's performance. Antescofo couples artificial machine listening with a reactive and timed system. The challenge of bringing human actions into the computing loop is strongly related to the specification and management of multiple time frameworks and to the timeliness of live execution, despite the heterogeneous nature of time in the two media. Interaction scenarios are expressed at a symbolic level through the management of musical time (i.e., events such as notes or beats in relative tempi) and of physical time (with relationships such as succession, delay, duration and speed between the occurrences of events during the performance on stage). Antescofo's unique features are presented through a series of examples which illustrate how to manage the execution of different audio processes through time and their interactions with an external environment. The approach has been validated through numerous uses of the system in live electronic performances of the contemporary music repertoire by various international music ensembles.
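    One way to picture the difference between the two time scales the language manipulates is the Python sketch below (illustrative only, not Antescofo syntax or semantics): a delay expressed in seconds is fixed in advance, whereas a delay expressed in beats stretches with the tempo estimated from the performer as the piece unfolds. The tempo-curve representation is an assumption made for the example.

```python
def physical_deadline(start_s, delay_s):
    """A delay in seconds is independent of the performance."""
    return start_s + delay_s

def musical_deadline(start_s, delay_beats, tempo_curve):
    """A delay in beats is stretched by the tempo estimated during the
    performance. tempo_curve: (duration_s, bpm) segments starting at start_s."""
    t, remaining = start_s, delay_beats
    for seg_s, bpm in tempo_curve:
        beats_in_seg = seg_s * bpm / 60.0
        if beats_in_seg >= remaining:
            return t + remaining * 60.0 / bpm
        remaining -= beats_in_seg
        t += seg_s
    return t + remaining * 60.0 / tempo_curve[-1][1]  # extrapolate the last tempo

# Two beats after t = 0: always 2.0 s in physical time, but 2.5 s if the
# performer holds 60 BPM for one second and then slows to 40 BPM.
print(physical_deadline(0.0, 2.0))
print(musical_deadline(0.0, 2.0, [(1.0, 60), (5.0, 40)]))
```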

    Practical examination of computer presence in electro-instrumental music

    This thesis explores the following questions: What is the influence of algorithmic software on the composition process? How can spectromorphologies be manipulated in search of coherent and lucid coupling in electro-instrumental (EI) music? What are the practical implications of the performance of EI music? This thesis will unfold practicalities, creative approaches, and new directions for the practice of EI music, drawing together spectromorphological theory and instrumental techniques. Framed around a body of work for solo instrument/ensemble with computer, I will assess each aspect of my musical process. Musical vocabularies, grammatical organisation and collaborative performance practices will be discussed. Specifically, my research breaks down the components of composition into context and materials, and attempts a categorisation and grammatical organisation including spectral and algorithmic techniques. With the knowledge that the computer has an influence on the music-making process, I identify and discuss some of its key contributions. Additionally, knowing that the tools and spaces that facilitate performance also impact the music, I seek to understand how these tools and environments contribute, in order to get the best musical responses from them. Collaboration is a key theme, and throughout the thesis I pay attention to performer presence in the music-making process. This thesis should be read in conjunction with my submitted portfolio for relevant case studies and musical examples.

    Proceedings of the 7th Sound and Music Computing Conference

    Proceedings of SMC2010, the 7th Sound and Music Computing Conference, July 21–24, 2010.

    Música mista e sistemas de relações dinâmicas

    This dissertation comprises a theoretical study and artistic creation around mixed music, a genre tied to computer music. Mixed music, as a musical genre, has been established since the second half of the twentieth century. It can be defined as the combination of acoustic and electronic means; more specifically, the combination in performance of one or more acoustic instruments with electronically generated, processed or reproduced sounds. The two main strategies used in this genre, deferred time and real time, present both advantages and constraints which strongly influence the freedom of performative expression. Starting from the questions raised by these problems and from the several kinds of relations established during a performance, we formulated the hypothesis that the relations which privilege musical expressiveness are those between humans, not those between human and technology. However, freedom of performative expression is achieved only when the acoustic and electroacoustic means are on an equal footing. This implies that the electroacoustic medium should be flexible enough to be performable through a system that behaves like an instrument. To verify these claims we undertook a systematic study of the problems around mixed-music performance, approaching its two fundamental elements separately: the human factor and the technological factor. After this parallel study, we brought these factors together, culminating in the proposition of a conceptual model of performance based on the importance of musical interaction between performers, which is a corollary of openness to the possibility of performative expression. This model, which we call the Dynamic Relations System, is a concept whose implications affect both performance and the creative act. The research includes a practical component with several applications of this concept, from which we were able to deduce degrees of applicability of the proposed model and thus verify its viability.