
    Herding cats: observing live coding in the wild

    After a momentous decade of live coding activities, this paper seeks to explore the practice with the aim of situating it in the history of contemporary arts and music. The article introduces several key points of investigation in live coding research and discusses some examples of how live coding practitioners engage with these points in their system design and performances. In the light of the extremely diverse manifestations of live coding activities, the problem of defining the practice is discussed, and the question is raised whether live coding will actually be necessary as an independent category.

    Music Information Retrieval in Live Coding: A Theoretical Framework

    The work presented in this article was conducted partly while the first author was at Georgia Tech (2015–2017), with the support of the School of Music, the Center for Music Technology, and Women in Music Tech at Georgia Tech, and partly while the first author was at Queen Mary University of London (2017–2019), with the support of the AudioCommons project, funded by the European Commission through the Horizon 2020 programme, research and innovation grant 688382.

    Music information retrieval (MIR) has great potential in musical live coding because it can help the musician–programmer make musical decisions based on audio content analysis and explore new sonorities by means of MIR techniques. Real-time MIR techniques can be computationally demanding and have therefore rarely been used in live coding; when they have been used, the focus has been on low-level feature extraction. This article surveys and discusses the potential of MIR applied to live coding at a higher musical level. We propose a conceptual framework of three categories: (1) audio repurposing, (2) audio rewiring, and (3) audio remixing. We explored the three categories in live performance through MIRLC, an application programming interface library written in SuperCollider. We found that using high-level features in real time remains a technical challenge, yet combining rhythmic and tonal properties (mid-level features) with text-based information (e.g., tags) helps to achieve a closer perceptual level centered on pitch and rhythm when using MIR in live coding. We discuss challenges and future directions for utilizing MIR approaches in the computer music field.
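
    To make the distinction between low-level and mid-level features concrete, here is a minimal sketch using the librosa Python library (an assumption for illustration; the article's own MIRLC library is written in SuperCollider and is not reproduced here). It extracts MFCCs as a low-level descriptor and tempo plus chroma as mid-level rhythmic and tonal descriptors that a live coder could act on; the file name is a placeholder.

        import librosa

        # Load an audio file (placeholder path, any short loop will do).
        y, sr = librosa.load("loop.wav", sr=None)

        # Low-level feature: MFCCs describe spectral shape but are hard to
        # reason about musically while performing.
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

        # Mid-level features: tempo and chroma map onto rhythm and pitch,
        # the perceptual level the article argues is more useful in live coding.
        tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
        chroma = librosa.feature.chroma_stft(y=y, sr=sr)

        print(f"estimated tempo: {float(tempo):.1f} BPM")
        print("strongest pitch class per frame:", chroma.argmax(axis=0)[:8])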

    PIWeCS: enhancing human/machine agency in an interactive composition system

    This paper focuses on the infrastructure and aesthetic approach used in PIWeCS: a Public Space Interactive Web-based Composition System. The concern was to increase the sense of dialogue between human and machine agency in an interactive work by adapting Paine's (2002) notion of a conversational model of interaction as a ‘complex system’. The machine implementation of PIWeCS is achieved by integrating intelligent agent programming with MAX/MSP, while human input arrives through a web infrastructure. The conversation is initiated and continued by participants through arrangements and compositions based on short performed samples of traditional New Zealand Maori instruments. The system allows the extension of a composition through the electroacoustic manipulation of the source material.
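
    As a hedged illustration of this kind of web-to-MAX/MSP hand-off (the abstract does not document PIWeCS's actual transport, address names, or agent logic, so everything below is assumed), a web-facing Python process could forward a participant's arrangement choice to a Max patch over OSC using the python-osc package.

        from pythonosc.udp_client import SimpleUDPClient

        # Hypothetical endpoint of a Max/MSP patch listening with [udpreceive 7400].
        client = SimpleUDPClient("127.0.0.1", 7400)

        def forward_arrangement(sample_id: int, gain: float, transpose: int) -> None:
            """Relay a web participant's choice to the sound engine.

            The OSC address and argument layout are illustrative, not the PIWeCS protocol.
            """
            client.send_message("/piwecs/arrange", [sample_id, gain, transpose])

        # Example: participant picks sample 3, moderately loud, down a tone.
        forward_arrangement(3, 0.7, -2)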

    Straddling the intersection

    Music technology straddles the intersection between art and science and presents those who choose to work within its sphere with many practical challenges as well as creative possibilities. The paper focuses on four main areas: secondary education, higher education, practice and research, and finally collaboration. It emphasises the importance of collaboration in tackling the challenges of interdisciplinarity and in influencing future technological developments.

    Cueing and composing for long distance network music collaborations.

    Long-distance network music collaborations beyond the ensemble performance threshold (EPT), described by Schuett in 2002 [14] as the point where playability is affected once the round-trip network delay exceeds 50 ms, call for the development of cueing mechanisms that are methodical and linked to musical parameters. The cueing strategies involved in such musical interactions will depend on the type of repertoire played and the network distance (ND) between the nodes involved in the performance. This paper proposes a semi-standardized cueing framework for real-time collaborations over the network with latencies of more than 50 ms. The paper also explores a compositional methodology for creating network-centric performances that could not occur outside of a networked situation.
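
    As a rough sketch of why network distance pushes performances past the EPT, the snippet below estimates a best-case round-trip delay from the path length, assuming signals travel through optical fibre at roughly 200,000 km/s (about two thirds of the speed of light in vacuum; a simplification that ignores routing detours and processing delay), and checks it against the 50 ms threshold. The city distances are approximate great-circle figures used only for illustration.

        FIBER_SPEED_KM_PER_MS = 200.0   # ~200,000 km/s expressed per millisecond
        EPT_THRESHOLD_MS = 50.0         # round-trip delay where playability degrades

        def min_round_trip_ms(path_km: float) -> float:
            """Best-case round-trip delay for a given one-way path length in km."""
            return 2 * path_km / FIBER_SPEED_KM_PER_MS

        def beyond_ept(path_km: float) -> bool:
            """True if even the physical minimum delay exceeds the EPT."""
            return min_round_trip_ms(path_km) > EPT_THRESHOLD_MS

        for city_pair, km in [("London-Berlin", 930), ("London-New York", 5570)]:
            rtt = min_round_trip_ms(km)
            print(f"{city_pair}: >= {rtt:.1f} ms round trip, beyond EPT: {beyond_ept(km)}")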

    Abmash: Mashing Up Legacy Web Applications by Automated Imitation of Human Actions

    Many business web-based applications do not offer application programming interfaces (APIs) to enable other applications to access their data and functions in a programmatic manner. This makes their composition difficult (for instance, to synchronize data between two applications). To address this challenge, this paper presents Abmash, an approach to facilitate the integration of such legacy web applications by automatically imitating human interactions with them. By automatically interacting with the graphical user interface (GUI) of web applications, the system supports all forms of integration, including bi-directional interactions, and is able to interact with AJAX-based applications. Furthermore, the integration programs are easy to write since they deal with end-user, visual user-interface elements. The integration code is simple enough to be called a "mashup". Comment: Software: Practice and Experience (2013).
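
    The core idea, driving a legacy web application through its GUI instead of an API, can be sketched with off-the-shelf browser automation. The snippet below uses Selenium as a stand-in (it is not Abmash, whose own Java API is not reproduced here) to log in and pull a value out of a page that offers no programmatic interface; the URL and element locators are placeholders.

        from selenium import webdriver
        from selenium.webdriver.common.by import By

        # Drive a real browser, just as a human user would.
        driver = webdriver.Firefox()
        try:
            driver.get("https://legacy-app.example.com/login")  # placeholder URL

            # Imitate the human actions: fill the form and click the button.
            driver.find_element(By.NAME, "username").send_keys("integration-bot")
            driver.find_element(By.NAME, "password").send_keys("secret")
            driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

            # Scrape a value the application never exposes through an API.
            balance = driver.find_element(By.ID, "account-balance").text
            print("scraped balance:", balance)
        finally:
            driver.quit()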

    Screen-based musical instruments as semiotic machines

    The ixi software project started in 2000 with the intention of exploring new interactive patterns and virtual interfaces in computer music software. The aim of this paper is not to describe these programs, as they have been described elsewhere, but rather to explicate the theoretical background that underlies the design of these screen-based instruments. After an analysis of the similarities and differences in the design of acoustic and screen-based instruments, the paper describes how the creation of an interface is essentially the creation of a semiotic system that affects and influences the musician and the composer. Finally, the terminology of this semiotics is explained as an interaction model.

    Metamorphic Domain-Specific Languages: A Journey Into the Shapes of a Language

    External or internal domain-specific languages (DSLs) or (fluent) APIs? Whoever you are -- a developer or a user of a DSL -- you usually have to choose your side; you should not! What about metamorphic DSLs that change their shape according to your needs? We report on our 4-year journey of providing the "right" support (in the domain of feature modeling), which led us to develop an external DSL and different shapes of an internal API, and to maintain all these languages. A key insight is that there is no one-size-fits-all solution and no clear superiority of one solution over another. On the contrary, we found that it does make sense to continue maintaining both an external and an internal DSL. The vision that we foresee for the future of software languages is their ability to adapt themselves to the most appropriate shape (including the corresponding integrated development environment) according to a particular usage or task. We call such a language, able to change from one shape to another, a metamorphic DSL.
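
    To make the external-versus-internal distinction concrete, here is a hypothetical miniature in the paper's domain of feature modeling (it is not the authors' actual languages, and the toy syntax, names, and fluent API are all invented for illustration). The same tiny feature model is expressed once as an external textual DSL that is parsed, and once through an internal fluent API embedded in Python.

        # External DSL: the model is plain text with its own mini-syntax.
        EXTERNAL_SOURCE = """
        feature Car mandatory Engine optional Sunroof
        """

        def parse(source: str) -> dict:
            """Parse the toy syntax: feature <root> (mandatory|optional <name>)*"""
            tokens = source.split()
            model = {"root": tokens[1], "mandatory": [], "optional": []}
            for kind, name in zip(tokens[2::2], tokens[3::2]):
                model[kind].append(name)
            return model

        # Internal DSL: the same model built through a fluent API in the host language.
        class FeatureModel:
            def __init__(self, root: str):
                self.model = {"root": root, "mandatory": [], "optional": []}

            def mandatory(self, name: str) -> "FeatureModel":
                self.model["mandatory"].append(name)
                return self  # returning self gives the fluent, chainable style

            def optional(self, name: str) -> "FeatureModel":
                self.model["optional"].append(name)
                return self

        external = parse(EXTERNAL_SOURCE)
        internal = FeatureModel("Car").mandatory("Engine").optional("Sunroof").model
        assert external == internal  # two shapes, one underlying model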