16 research outputs found

    Autopia: An AI Collaborator for Live Coding Music Performances

    Live coding is “the activity of writing (parts of) a program while it runs” (Ward et al., 2004). One significant application of live coding is in algorithmic music, where the performer modifies the code generating the music in a live context. Utopia is a software tool for collaborative live coding performances, allowing several performers (each with their own laptop producing its own sound) to communicate and share code during a performance. We have made an AI bot, Autopia, which can participate in such performances, communicating with human performers through Utopia. This form of human-AI collaboration allows us to explore the implications of computational creativity from the perspective of live coding.

    Musical Deep Learning: Stylistic Melodic Generation with Complexity Based Similarity

    The wide-ranging impact of deep learning models implies significant application in music analysis, retrieval, and generation. Initial findings from the musical application of a conditional restricted Boltzmann machine (CRBM) show promise towards informing creative computation. Taking advantage of the CRBM’s ability to model temporal dependencies, full reconstructions of pieces are achievable given a few starting seed notes. The generation of new material using figuration from the training corpus requires restrictions on the size and memory space of the CRBM, forcing associative rather than perfect recall. Musical analysis and information complexity measures show the musical encoding to be the primary determinant of the nature of the generated results.
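
    As a rough illustration of the kind of temporal conditioning the abstract describes, the sketch below implements a generic conditional RBM over a binary piano-roll encoding in numpy. The layer sizes, the CD-1 training step, the toy corpus and the generation settings are illustrative assumptions, not the encoding, corpus or hyperparameters used in the paper.

        import numpy as np

        # Minimal sketch of a conditional RBM (CRBM) for note-sequence generation,
        # assuming a binary piano-roll encoding (one vector of pitch on/off flags
        # per time step). All sizes and settings here are placeholders.

        rng = np.random.default_rng(0)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        class CRBM:
            def __init__(self, n_visible, n_hidden, order):
                self.order = order                      # number of past frames conditioned on
                n_hist = n_visible * order
                self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
                self.A = 0.01 * rng.standard_normal((n_hist, n_visible))  # history -> visible bias
                self.B = 0.01 * rng.standard_normal((n_hist, n_hidden))   # history -> hidden bias
                self.b_v = np.zeros(n_visible)
                self.b_h = np.zeros(n_hidden)

            def _dyn_biases(self, history):
                u = history.reshape(-1)                 # flatten the past `order` frames
                return self.b_v + u @ self.A, self.b_h + u @ self.B

            def sample_h(self, v, bh):
                p = sigmoid(v @ self.W + bh)
                return p, (rng.random(p.shape) < p).astype(float)

            def sample_v(self, h, bv):
                p = sigmoid(h @ self.W.T + bv)
                return p, (rng.random(p.shape) < p).astype(float)

            def cd1(self, v0, history, lr=0.05):
                """One contrastive-divergence step on a single (history, frame) pair."""
                bv, bh = self._dyn_biases(history)
                ph0, h0 = self.sample_h(v0, bh)
                _, v1 = self.sample_v(h0, bv)
                ph1, _ = self.sample_h(v1, bh)
                u = history.reshape(-1)
                self.W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
                self.A += lr * np.outer(u, v0 - v1)
                self.B += lr * np.outer(u, ph0 - ph1)
                self.b_v += lr * (v0 - v1)
                self.b_h += lr * (ph0 - ph1)

            def generate(self, seed, n_steps, gibbs=30):
                """Continue a piece from a few seed frames by sliding the history window."""
                frames = list(seed)
                for _ in range(n_steps):
                    history = np.array(frames[-self.order:])
                    bv, bh = self._dyn_biases(history)
                    v = frames[-1].copy()               # start the Gibbs chain from the last frame
                    for _ in range(gibbs):
                        _, h = self.sample_h(v, bh)
                        _, v = self.sample_v(h, bv)
                    frames.append(v)
                return np.array(frames)

        # Toy usage: 12-pitch piano roll, conditioning on the previous 4 frames.
        model = CRBM(n_visible=12, n_hidden=32, order=4)
        roll = (rng.random((200, 12)) < 0.15).astype(float)   # stand-in training data
        for t in range(model.order, len(roll)):
            model.cd1(roll[t], roll[t - model.order:t])
        continuation = model.generate(seed=roll[:4], n_steps=16)
        print(continuation.shape)                              # (20, 12): 4 seed + 16 generated frames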

    Designing Computationally Creative Musical Performance Systems

    This is work in progress in which we outline a design process for a computationally creative musical performance system using the Creative Systems Framework (CSF). The proposed system is intended to produce virtuosic interpretations, and subsequent synthesized renderings of these interpretations with a physical model of a bass guitar, using case-based reasoning and reflection. We introduce our interpretations of virtuosity and musical performance, outline the suitability of case-based reasoning in computationally creative systems, and introduce notions of computational creativity and the CSF. We design our system by formalising the components of the CSF and briefly outline a potential implementation. In doing so, we demonstrate how the CSF can be used as a tool to aid in designing computationally creative musical performance systems.

    Aspects of Self-awareness: An Anatomy of Metacreative Systems

    We formulate a model of computational metacreativity. It consists of various aspects of creative self-awareness that potentially contribute, in various combinations, to the metacreative capabilities of a creative system. Our model is inspired by a psychological view of metacreativity promoting the awareness of one's thoughts during the creative process, and draws from the field of self-adaptive software systems to explicate different viewpoints of metacreativity in creative systems. The model is designed to help in analyzing the metacreative capabilities of creative systems and to guide the development of creative systems in a more autonomous and adaptive direction.

    On creative practice and generative AI: Co-shaping the development of emerging artistic technologies

    In recent years, advances in artificial intelligence (AI) and machine learning have given rise to powerful new tools and methods for creative practitioners. 2022–2023 in particular saw an explosion in generative AI tools, models and use cases. Noting the long history of critical arts engaging with AI, this chapter considers both the application of generative AI in the creative industries and the ways in which artists co-shape the development of these emerging technologies. After reviewing the landscape of generative AI in visual arts, music and games, we propose four areas of critical interest for the future co-shaping of generative AI and creative practice: communities and open source, deeper engagement with AI, beyond the human, and cultural feedbacks.

    Virtual Agents in Live Coding: A Short Review

    Although this special issue was scheduled for 15 March 2021, it remains unpublished: https://econtact.ca/call.html (eContact! 21.1 — Take Back the Stage: Live coding, live audiovisual, laptop orchestra). AI and live coding have been little explored together. This article contributes a short review of different perspectives on using virtual agents in the practice of live coding, looking at past and present practice and pointing to future directions.

    Audio-based Musical Artificial Intelligence and Audio-Reactive Visual Agents in Revive

    Revive is a live audio-visual performance project that brings together a musical artificial intelligence architecture, human electronic musicians, and audio-reactive visual agents in a complex multimedia environment with a dome view and multichannel 3D audio. The context of the project is live audio-visual performance of experimental electronic music through structured improvisation. Revive applies structured improvisation using cues and automated parameter changes within these cues. Performers have different roles within the musical structures initiated by the cues. These roles change as the performance temporally evolves. Sonic actions of performers are further emphasized by audio-reactive visual agents. The behaviours and contents of the sonic and visual agents change as the performance unfolds.
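
    To make the cue-driven structure concrete, the sketch below models a score as a list of cues, each assigning performer roles and scheduling automated parameter ramps for its section. The cue names, roles and parameters are hypothetical placeholders and are not taken from the Revive performance itself.

        from dataclasses import dataclass, field
        import time

        # Hypothetical sketch of cue-driven structured improvisation: each cue opens a
        # section, assigns performer roles, and schedules automated parameter ramps.

        @dataclass
        class Ramp:
            param: str
            start: float
            end: float
            duration: float          # seconds over which the value is interpolated

            def value_at(self, elapsed):
                t = min(max(elapsed / self.duration, 0.0), 1.0)
                return self.start + t * (self.end - self.start)

        @dataclass
        class Cue:
            name: str
            length: float                                # section length in seconds
            roles: dict = field(default_factory=dict)    # performer -> role in this section
            ramps: list = field(default_factory=list)    # automated parameter changes

        # Invented example score with two sections.
        score = [
            Cue("intro", 60.0,
                roles={"performer_1": "texture", "performer_2": "pulse", "ai_agent": "listener"},
                ramps=[Ramp("reverb_mix", 0.1, 0.6, 45.0)]),
            Cue("build", 90.0,
                roles={"performer_1": "pulse", "performer_2": "lead", "ai_agent": "counterpoint"},
                ramps=[Ramp("filter_cutoff", 400.0, 4000.0, 90.0),
                       Ramp("visual_reactivity", 0.2, 0.9, 60.0)]),
        ]

        def run(score, tick=1.0):
            """Walk through the cues, printing the automated parameter values each tick."""
            for cue in score:
                print(f"cue '{cue.name}': roles {cue.roles}")
                elapsed = 0.0
                while elapsed < cue.length:
                    for ramp in cue.ramps:
                        print(f"  t={elapsed:5.1f}s {ramp.param} = {ramp.value_at(elapsed):.2f}")
                    time.sleep(tick)        # stand-in for the audio/visual engine's clock
                    elapsed += tick

        # run(score)  # would print role assignments and parameter trajectories in real time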