135 research outputs found

    Music Information Retrieval in Live Coding: A Theoretical Framework

    The work presented in this article has been partly conducted while the first author was at Georgia Tech from 2015–2017 with the support of the School of Music, the Center for Music Technology and Women in Music Tech at Georgia Tech. Another part of this research was conducted while the first author was at Queen Mary University of London from 2017–2019 with the support of the AudioCommons project, funded by the European Commission through the Horizon 2020 programme, research and innovation grant 688382. The file attached to this record is the author's final peer-reviewed version; the publisher's final version can be found by following the DOI link.

    Music information retrieval (MIR) has great potential in musical live coding because it can help the musician–programmer make musical decisions based on audio content analysis and explore new sonorities by means of MIR techniques. Real-time MIR techniques can be computationally demanding and have therefore rarely been used in live coding; when they have been used, the focus has been on low-level feature extraction. This article surveys and discusses the potential of MIR applied to live coding at a higher musical level. We propose a conceptual framework of three categories: (1) audio repurposing, (2) audio rewiring, and (3) audio remixing. We explored the three categories in live performance through MIRLC, an application programming interface library written in SuperCollider. We found that using high-level features in real time remains a technical challenge, yet using rhythmic and tonal properties (mid-level features) in combination with text-based information (e.g., tags) helps to achieve a closer perceptual level centered on pitch and rhythm when using MIR in live coding. We discuss challenges and future directions for utilizing MIR approaches in the computer music field.
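    As a concrete illustration of the distinction the abstract draws between low-level and mid-level features, here is a minimal sketch in Python (the MIRLC library itself is written in SuperCollider; these function names are illustrative, not part of MIRLC):

    ```python
    import numpy as np

    def spectral_centroid(signal, sr):
        """Low-level feature: magnitude-weighted mean frequency of the spectrum."""
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
        return float(np.sum(freqs * spectrum) / np.sum(spectrum))

    def dominant_pitch(signal, sr):
        """Mid-level tonal property: frequency of the strongest spectral peak."""
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
        return float(freqs[np.argmax(spectrum)])

    sr = 44100
    t = np.arange(sr) / sr
    a4 = np.sin(2 * np.pi * 440.0 * t)  # one second of a pure A4 tone

    print(dominant_pitch(a4, sr))  # 440.0
    ```

    For a pure tone the two measures coincide; on real audio the centroid tracks brightness while peak-picking is only a crude pitch estimate, which hints at why extracting reliable higher-level features in real time is the harder problem the article describes.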

    Herding cats: observing live coding in the wild

    After a momentous decade of live coding activities, this paper seeks to explore the practice with the aim of situating it in the history of contemporary arts and music. The article introduces several key points of investigation in live coding research and discusses examples of how live coding practitioners engage with these points in their system design and performances. In light of the extremely diverse manifestations of live coding activities, the problem of defining the practice is discussed, and the question is raised whether live coding will actually be necessary as an independent category.

    Code scores in live coding practice

    This paper explores live coding environments in the context of notational systems. The improvisational practice of live coding, combining both composition and performance, is introduced and selected systems are discussed. The author's Threnoscope system is described: a system that enables the performer to work with both descriptive and prescriptive scores that can be run and altered in an improvisational performance.

    Live Coding Language Design: PHAD

    Phad is a live coding environment built for web browsers. Phad strives for minimal syntax, with the purpose of achieving both readability and simplicity for users with no programming or music experience. It is built around the modification of notes and instruments, and the combination of both. The Phad environment was built with the user in mind, offering tutorials, exercises, a playground, and collaborative performance. Users are able to log in with a Google Account and save code to our database, making it easy to return to previous work or to see what other users have created. The language and system design of Phad was inspired by previous live coding environments and was improved iteratively with external feedback as well as our experience at the Algorave.

    Live coding the code: an environment for 'meta' live code performance

    Live coding languages operate by constructing and reconstructing a program designed to create sound. These languages often have domain-specific affordances for sequencing changes over time, commonly described as patterns or sequences. Rarely are these affordances completely generic. Instead, live coders work within the constraints of their chosen language, sequencing the parameters the language allows with the timing the language allows. This paper presents a novel live coding environment for the existing language lissajous that allows sequences of text input to be recorded, replayed, and manipulated just like any other musical parameter. Although initially written for the lissajous language, the presented environment is able to interface with other browser-based live coding languages such as Gibber. This paper outlines our motivations behind the development of the presented environment before discussing its creative affordances and technical implementation, concluding with a discussion of a number of evaluation metrics for such an environment and how the work can be extended in the future.
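    The core idea, treating sequences of text input as recordable and manipulable musical data, can be sketched in a few lines (in Python rather than the browser-based JavaScript of lissajous or Gibber; the recorded strings and class name are illustrative, not the system's actual API):

    ```python
    class InputRecorder:
        """Records (beat, text) events so that text input can be replayed
        like any other musical pattern: reversed, time-stretched, or looped."""

        def __init__(self):
            self.events = []  # list of (time_in_beats, text) pairs

        def record(self, beat, text):
            self.events.append((beat, text))

        def replay(self, stretch=1.0, reverse=False):
            """Return the event list, optionally time-stretched or reversed."""
            events = list(reversed(self.events)) if reverse else list(self.events)
            return [(beat * stretch, text) for beat, text in events]

    rec = InputRecorder()
    rec.record(0.0, "tri()")
    rec.record(1.0, "beat(4)")
    rec.record(2.0, "volume(0.5)")

    print(rec.replay(stretch=2.0))
    # [(0.0, 'tri()'), (2.0, 'beat(4)'), (4.0, 'volume(0.5)')]
    ```

    Manipulations such as `stretch` and `reverse` treat the edit history itself as a pattern, which is the "meta" move the paper describes.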

    Prototyping of Ubiquitous Music Ecosystems

    This paper focuses on the prototyping stage of the design cycle of ubiquitous music (ubimus) ecosystems. We present three case studies of prototype deployments for creative musical activities. The first case exemplifies a ubimus system for synchronous musical interaction using a hybrid Java/JavaScript development platform, mow3s-ecolab. The second case study makes use of the HTML5 Web Audio library to implement a loop-based sequencer. The third prototype, an HTML-controlled sine-wave oscillator, provides an example of using the Chromium open-source sandboxing technology Portable Native Client (PNaCl) for audio programming on the web. This new approach involved porting the Csound language and audio engine to the PNaCl web technology. The Csound PNaCl environment provides programming tools for ubiquitous audio applications that go beyond the HTML5 Web Audio framework. The limitations and advantages of the three approaches proposed, the hybrid Java/JavaScript environment, the HTML5 audio library, and the Csound PNaCl infrastructure, are discussed in the context of rapid prototyping of ubimus ecosystems.
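    The loop-based sequencer of the second case study is built on HTML5 Web Audio, but its timing logic is platform-independent and can be sketched in Python (an illustration under our own assumptions, not code from the mow3s-ecolab or Csound PNaCl prototypes):

    ```python
    def step_times(bpm, steps_per_bar=16, bars=1):
        """Trigger times in seconds for a loop-based step sequencer,
        assuming a 4/4 bar (four beats) divided into `steps_per_bar` steps."""
        seconds_per_beat = 60.0 / bpm
        seconds_per_step = seconds_per_beat * 4 / steps_per_bar
        return [i * seconds_per_step for i in range(steps_per_bar * bars)]

    print(step_times(bpm=120)[:4])  # [0.0, 0.125, 0.25, 0.375]
    ```

    In a Web Audio implementation these times would be offsets from `AudioContext.currentTime` at which each step's source node is scheduled.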

    Live coding machine learning and machine listening: a survey on the design of languages and environments for live coding

    The MIMIC (Musically Intelligent Machines Interacting Creatively) project explores how the techniques of machine learning and machine listening can be communicated and implemented in simple terms for composers, instrument makers and performers. The potential for machine learning to support musical composition and performance is high, and with novel techniques in machine listening, we see emerging a technology that can shift from being instrumental to conversational and collaborative. By leveraging the internet as a live software ecosystem, the MIMIC project explores how such technology can best reach artists, and live up to its collaborative potential to fundamentally change creative practice in the field. The project involves creating a high-level language that can be used for live coding, creative coding and quick prototyping. Implementing a language that interfaces with technically complex problems such as the design of machine learning neural networks or the temporal and spectral algorithms applied in machine listening is not a simple task, but we can build upon decades of research and practice in programming language design (Ko 2016), and computer music language design in particular, as well as a plethora of inventive new approaches in the design of live coding systems for music (Reina et al. 2019). The language and user interface design will build on recent research in creative coding and interactive machine learning, exemplified by the Rapid Mix project (Bernardo et al. 2016; Zbyszynski et al. 2017). Machine learning continues to be at the forefront of new innovations in computer music (e.g., new sound synthesis techniques in NSynth (Engel et al. 2017) and WaveNet (van den Oord 2016)); the language will seek to integrate models based around these new techniques into live coding performance, and also explore the efficacy of live coding as an approach to training and exploiting these systems for analysing and generating sound.

    Existing live coding systems and languages are often reported on, describing clever solutions as well as weaknesses, as given, for example, in accounts of the development of Tidal (McLean 2014), Extramuros (Ogborn et al. 2015) and Gibber (Roberts and Kuchera-Morin 2012). Researchers are typically reflective and openly critical of their own systems when analysing them, and often report on their designs with wider implications (Aaron 2011; Sorensen 2018). However, they rarely speculate freely and uninhibitedly about possible solutions or alternative paths; the focus is typically on the system described. Before defining the design of our own system, we were therefore interested in opening up a channel through which we could learn from other practitioners in language design, machine learning and machine listening. We created a survey that we sent out to relevant communities of practice - such as live coding, machine learning, machine listening, creative coding, deep learning - and asked open questions about how they might imagine a future system implemented, given the knowledge we have today. Below we report on the questionnaire and its findings.

    The PENELOPE Project: A Case Study in Computational Thinking

    Weaving is presented in this paper in relation to the four key categories of computational thinking: decomposition, pattern recognition, abstraction, and algorithms. The role of weaving in the development of theoretical concepts is underestimated, because we perceive weaving as a minor craft with little technological challenge or impact. Where technological progress is measured in terms of saving time on dull and tedious repetitive tasks, weaving appears to be the archetype of such repetitive work, and thus counts as the more technological the faster it goes. We address this framing by presenting ancient weaving as the earliest binary and digital technology. The PENELOPE project (ERC CoG grant no. 682711) aims to develop a theory of weaving as part of a deep history and epistemology of digital technology. In explicating the mathematical and computing principles invoked in weaving, we furthermore explore weaving as a kind of education that has the potential to engage the tacit knowledge necessary to make technical and aesthetic choices in coding. On this basis, we argue for an alternative history of digital art.
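    The claim that ancient weaving is a binary technology can be made concrete: a weave draft is a 0/1 matrix in which each cell records whether a warp thread passes over (1) or under (0) the weft. A small illustrative sketch (our own example, not code from the PENELOPE project):

    ```python
    def weave_draft(rows, cols, repeat):
        """Tile a repeat unit (a list of 0/1 rows) into a rows x cols draft.
        1 = warp over weft, 0 = warp under weft."""
        return [[repeat[r % len(repeat)][c % len(repeat[0])]
                 for c in range(cols)]
                for r in range(rows)]

    plain = weave_draft(4, 4, [[1, 0], [0, 1]])                   # plain weave (tabby)
    twill = weave_draft(4, 6, [[1, 1, 0], [1, 0, 1], [0, 1, 1]])  # diagonal twill repeat

    for row in plain:
        print(row)
    # [1, 0, 1, 0]
    # [0, 1, 0, 1]
    # [1, 0, 1, 0]
    # [0, 1, 0, 1]
    ```

    Decomposing a cloth into a small repeat unit, recognising its pattern, abstracting it as a matrix, and tiling it algorithmically maps directly onto the four categories of computational thinking named above.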