
    Introducing CatOracle: Corpus-based concatenative improvisation with the Audio Oracle algorithm

    CatOracle responds to the need to join high-level control of audio timbre with the organization of musical form in time. It is inspired by two powerful existing tools: CataRT for corpus-based concatenative synthesis, based on the MuBu for Max library, and PyOracle for computer improvisation, combining for the first time audio descriptor analysis with the learning and generation of musical structures. Harnessing a user-defined list of audio features, live or prerecorded audio is analyzed to construct an "Audio Oracle" as a basis for improvisation. CatOracle also extends the features of classic concatenative synthesis to include live interactive audio mosaicking and score-based transcription using the bach library for Max. The project suggests applications not only to live performance of written and improvised electroacoustic music, but also to computer-assisted composition and musical analysis.
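
    The Audio Oracle construction mentioned above can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical rendering of the idea (not CatOracle's actual code): each analysed frame of descriptors becomes a state, and a transition label is reused when a new frame lies within a distance threshold of it, following the incremental factor-oracle scheme with suffix links. The function name, the Euclidean metric, and the threshold value are illustrative assumptions.

    import numpy as np

    def build_audio_oracle(frames, threshold=0.1):
        """Link descriptor frames whose distance falls under a similarity
        threshold, following the incremental factor-oracle construction with
        suffix links. `frames` is an (n_frames, n_descriptors) array of audio
        features (e.g. loudness, spectral centroid, MFCCs)."""
        frames = np.asarray(frames, dtype=float)
        n = len(frames)
        trans = [dict() for _ in range(n + 1)]   # state -> {target state: frame index labelling the arc}
        sfx = [-1] * (n + 1)                     # suffix links back into the past

        def similar(a, b):
            # Two frames count as "the same symbol" when their descriptors are close.
            return np.linalg.norm(frames[a] - frames[b]) <= threshold

        for i in range(1, n + 1):
            sym = i - 1                          # index of the frame labelling state i
            trans[i - 1][i] = sym                # forward transition from the previous state
            k = sfx[i - 1]
            while k > -1 and not any(similar(lbl, sym) for lbl in trans[k].values()):
                trans[k][i] = sym                # add a jump transition to the new state
                k = sfx[k]
            if k == -1:
                sfx[i] = 0
            else:
                # follow the existing transition whose label matches the new frame
                sfx[i] = next(t for t, lbl in trans[k].items() if similar(lbl, sym))
        return trans, sfx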

    Automatic Construction of Interactive Machine Improvisation Scenarios from Audio Recordings

    We describe a system that allows improvisers and composers to construct an interactive musical environment directly from a musical recording. Currently, interactive music pieces require separate phases of constructing generative models and structuring them into a larger compositional plan. In the proposed system we combine machine improvisation tools based on the Variable Markov Oracle (VMO) with an interactive score (i-score) to control the improvisation according to larger structures found in that recording. This allows improvisation scenarios to be constructed in ways that are organic to the musical materials used for generating the music. The method uses new results on audio segmentation based on the VMO and translates the segmentation into a Petri Net (PN) model whose transition rules are left open to be defined by a musician. The PN structure is finally translated into a timed representation for live i-score control.
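
    As a rough illustration of turning segment structure into a Petri Net whose firing rules remain open to the musician, a toy model might look like the Python sketch below. The class, the section names, and the guard hook are hypothetical; this is not the paper's actual PN-to-i-score pipeline.

    class PetriNet:
        """Places hold tokens; a transition fires when all of its input places
        are marked and its guard allows it."""

        def __init__(self):
            self.marking = {}                  # place name -> token count
            self.transitions = {}              # name -> (inputs, outputs, guard)

        def add_place(self, name, tokens=0):
            self.marking[name] = tokens

        def add_transition(self, name, inputs, outputs, guard=lambda: True):
            self.transitions[name] = (inputs, outputs, guard)

        def enabled(self, name):
            inputs, _, guard = self.transitions[name]
            return all(self.marking[p] > 0 for p in inputs) and guard()

        def fire(self, name):
            inputs, outputs, _ = self.transitions[name]
            if not self.enabled(name):
                raise RuntimeError(f"transition {name!r} is not enabled")
            for p in inputs:
                self.marking[p] -= 1
            for p in outputs:
                self.marking[p] += 1

    # Two sections found by segmentation; the rule for moving from A to B is
    # the open choice left to the musician (here: always allowed).
    net = PetriNet()
    net.add_place("section_A", tokens=1)
    net.add_place("section_B")
    net.add_transition("A_to_B", ["section_A"], ["section_B"])
    if net.enabled("A_to_B"):
        net.fire("A_to_B")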

    Enabling Embodied Analogies in Intelligent Music Systems

    The present methodology is aimed at cross-modal machine learning and uses multidisciplinary tools and methods drawn from a broad range of areas and disciplines, including music, systematic musicology, dance, motion capture, human-computer interaction, computational linguistics and audio signal processing. The main tasks include: (1) adapting wisdom-of-the-crowd approaches to embodiment in music and dance performance to create a dataset of music and music lyrics covering a variety of emotions, (2) applying audio- and language-informed machine learning techniques to that dataset to automatically identify the emotional content of the music and the lyrics, and (3) integrating motion capture data from a Vicon system and from dancers performing to that music.

    Using Multidimensional Sequences For Improvisation In The OMax Paradigm

    Automatic music improvisation systems based on the OMax paradigm use training over a one-dimensional sequence to generate original improvisations. Different systems use different heuristics to guide the improvisation, but none of them benefits from training over a multidimensional sequence. We propose a system that creates improvisation in a way closer to that of a human improviser, where the intuition of a context is enriched with knowledge. This system combines a probabilistic model, trained on a corpus and taking into account the multidimensional aspect of music, with a factor oracle. The probabilistic model is constructed by interpolating sub-models and represents the knowledge of the system, while the factor oracle (the structure used in OMax) represents the context. The results show the potential of such a system to perform better navigation in the factor oracle, guided by knowledge over several dimensions.
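
    The combination of interpolated sub-models (knowledge) with factor-oracle navigation (context) could be sketched as follows: one bigram sub-model per musical dimension, combined by weighted linear interpolation to score candidate continuations. The toy corpus, the choice of dimensions, and the weights are purely illustrative assumptions, not the paper's trained model.

    from collections import Counter, defaultdict

    def train_submodel(sequence):
        """Bigram counts for one musical dimension (e.g. pitch or duration)."""
        counts = defaultdict(Counter)
        for prev, nxt in zip(sequence, sequence[1:]):
            counts[prev][nxt] += 1
        return counts

    def interpolated_score(context, candidate, submodels, weights):
        """Weighted linear interpolation of per-dimension bigram probabilities."""
        score = 0.0
        for dim, (model, weight) in enumerate(zip(submodels, weights)):
            following = model[context[dim]]
            total = sum(following.values())
            prob = following[candidate[dim]] / total if total else 0.0
            score += weight * prob
        return score

    # Toy corpus with two dimensions per event: (pitch, duration).
    corpus = [("C", 1.0), ("E", 0.5), ("G", 0.5), ("C", 1.0), ("E", 1.0)]
    pitch_model = train_submodel([event[0] for event in corpus])
    duration_model = train_submodel([event[1] for event in corpus])

    # Rank factor-oracle continuation candidates with the interpolated "knowledge" model.
    candidates = [("E", 0.5), ("G", 1.0)]
    best = max(candidates,
               key=lambda c: interpolated_score(("C", 1.0), c,
                                                [pitch_model, duration_model],
                                                [0.6, 0.4]))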

    On Improvised Music, Computational Creativity and Human-Becoming

    Music improvisation is an act of human-becoming: of self-expression—an articulation of histories and memories that have molded its participants—and of exploration—a search for unimagined structures that break with the stale norms of majoritarian culture. Given that the former objective may inhibit the latter, we propose an integration of human musical improvisers and deliberately flawed creative software agents that are designed to catalyze the development of human-ratified minoritarian musical structures.

    DYCI2 agents: merging the "free", "reactive", and "scenario-based" music generation paradigms

    The collaborative research and development project DYCI2, Creative Dynamics of Improvised Interaction, focuses on conceiving, adapting, and bringing into play efficient models of artificial listening, learning, interaction, and generation of musical contents. It aims at developing creative and autonomous digital musical agents able to take part in various human projects in an interactive and artistically credible way, and, in the end, at contributing to the perceptual and communication skills of embedded artificial intelligence. The areas concerned are live performance, production, pedagogy, and active listening. This paper gives an overview focusing on one of the three main research issues of this project: conceiving multi-agent architectures and models of knowledge and decision in order to explore scenarios of music co-improvisation involving human and digital agents. The objective is to merge the usually exclusive "free", "reactive", and "scenario-based" paradigms in interactive music generation to adapt to a wide range of musical contexts involving hybrid temporality and multimodal interactions.
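
    To make the "scenario-based" side of this merger concrete, here is a deliberately naive Python sketch of scenario-guided generation: a memory of labelled events is queried with a sequence of target labels. The data model, labels, and function are hypothetical stand-ins, not the DYCI2 library's API.

    import random

    def generate_from_scenario(memory, scenario):
        """For each label required by the scenario, pick a memory event carrying
        that label (falling back to any event when none matches)."""
        by_label = {}
        for event, label in memory:
            by_label.setdefault(label, []).append(event)
        output = []
        for label in scenario:
            pool = by_label.get(label) or [event for event, _ in memory]
            output.append(random.choice(pool))
        return output

    # Memory learnt from a performance: (musical event, harmonic label) pairs.
    memory = [("lick1", "Cmaj7"), ("lick2", "A7"), ("lick3", "Dm7"), ("lick4", "G7")]
    scenario = ["Cmaj7", "A7", "Dm7", "G7", "Cmaj7"]
    print(generate_from_scenario(memory, scenario))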

    Generating Equivalent Chord Progressions to Enrich Guided Improvisation : Application to Rhythm Changes

    This paper presents a method that takes the form of a tune into account at several levels of organisation in order to guide music generation processes to match this structure. We first show how a phrase structure grammar can represent a hierarchical analysis of chord progressions and be used to create multi-level progressions. We then explain how to exploit this multi-level structure of a tune for music generation and how it enriches the possibilities of guided machine improvisation. We illustrate our method on a prominent jazz chord progression called 'rhythm changes'. After creating a phrase structure grammar for 'rhythm changes' with a professional musician, the terminals of this grammar are learnt automatically from a corpus. Then, we generate melodic improvisations guided by multi-level progressions created by the grammar. The results show the potential of our method to ensure the consistency of the improvisation with respect to the global form of the tune, and how knowledge of a corpus of chord progressions sharing the same hierarchical organisation can extend the possibilities of music generation.
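
    A toy phrase structure grammar in the spirit of the one described here might be sketched as follows, assuming an AABA 'rhythm changes' form in Bb; the productions and chord terminals are illustrative placeholders for the variants that the paper learns from a corpus.

    import random

    # Toy phrase structure grammar for an AABA 'rhythm changes' form in Bb.
    # Non-terminals expand top-down; the chord terminals stand in for the
    # variants learnt from a corpus in the paper.
    grammar = {
        "TUNE": [["A", "A", "B", "A"]],
        "A": [["I_VI", "II_V", "I_VI", "II_V"]],
        "B": [["III7", "VI7", "II7", "V7"]],
        "I_VI": [["Bb6", "G7"], ["Bb6", "Bo7"]],
        "II_V": [["Cm7", "F7"]],
        "III7": [["D7"]], "VI7": [["G7"]], "II7": [["C7"]], "V7": [["F7"]],
    }

    def expand(symbol):
        """Recursively rewrite a symbol until only chord terminals remain."""
        if symbol not in grammar:
            return [symbol]
        production = random.choice(grammar[symbol])
        return [chord for part in production for chord in expand(part)]

    print(expand("TUNE"))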