
    Music Information Retrieval in Live Coding: A Theoretical Framework

    The work presented in this article was conducted in part while the first author was at Georgia Tech (2015–2017), with the support of the School of Music, the Center for Music Technology and Women in Music Tech at Georgia Tech, and in part while the first author was at Queen Mary University of London (2017–2019), with the support of the AudioCommons project, funded by the European Commission through the Horizon 2020 programme under research and innovation grant 688382.

    Music information retrieval (MIR) has great potential in musical live coding because it can help the musician-programmer make musical decisions based on audio content analysis and explore new sonorities by means of MIR techniques. Real-time MIR techniques can be computationally demanding and have therefore rarely been used in live coding; when they have been used, the focus has been on low-level feature extraction. This article surveys and discusses the potential of MIR applied to live coding at a higher musical level. We propose a conceptual framework of three categories: (1) audio repurposing, (2) audio rewiring, and (3) audio remixing. We explored the three categories in live performance through MIRLC, an application programming interface library written in SuperCollider. We found that using high-level features in real time remains a technical challenge, yet combining rhythmic and tonal properties (mid-level features) with text-based information (e.g., tags) helps to achieve a closer perceptual level centred on pitch and rhythm when using MIR in live coding. We discuss challenges and future directions for MIR approaches in the computer music field.

    Performing Audiences: Composition Strategies for Network Music using Mobile Phones

    With the development of web audio standards, it has quickly become technically easy to develop and deploy software that invites audiences to participate in musical performances using their mobile phones. A new audience-centric musical genre has thus emerged, which aligns with artistic manifestations that explicitly include the public (e.g. participatory art, cinema or theatre). Previous research has analysed this new genre from historical, social-organisation and technical perspectives. This follow-up paper contributes reflections on the technical and aesthetic aspects of composing within this audience-centric approach. We propose a set of 13 composition dimensions that deal with, among others, the role of the performer, the role of the audience, the location of sound and the type of feedback. From a reflective approach, four participatory pieces developed by the authors are analysed using the proposed dimensions. Finally, we discuss a set of recommendations and challenges for the composer-developers of this new and promising musical genre, and conclude by discussing the implications of this research for the NIME community.

    How to Talk of Music Technology: An Interview Analysis Study of Live Interfaces for Music Performance among Expert Women

    With the aim of making women’s work in music technology more visible, the organisation Women Nordic Music Technology (WoNoMute) has initiated conversations with expert women in the form of seminar talks and interviews that are archived digitally. This paper analyses the first seven interviews and seminar talks from this online archive using thematic analysis. We explore whether and how the interviewees’ gender shapes their tools, focusing on live interfaces. Based on our findings, we propose investigating alternative usage of the technical term ‘music technology’ to accommodate more diversity and fluidity in the field. This can inform a revision of the language used in education and human-computer interaction, both to be more inclusive and to foster a more conscious creation of professional and academic environments that involve music technology.

    Live Coding with Crowdsourced Sounds and A Virtual Agent Companion

    This performance combines machine learning algorithms with music information retrieval techniques to retrieve crowdsourced sounds from the online database Freesound.org, resulting in a sound-based music style. A virtual companion complements the human live coder in her/his practice. The core themes of legibility, agency and negotiability in performance are explored through the collaboration between the human live coder, the virtual agent and the audience.

    Virtual Agents in Live Coding: A Short Review

    Note: although this special issue was scheduled for 15 March 2021, it is still unpublished: https://econtact.ca/call.html (eContact! 21.1 — Take Back the Stage: Live coding, live audiovisual, laptop orchestra…)

    The combination of AI and live coding has been little explored. This article contributes a short review of different perspectives on the use of virtual agents in the practice of live coding, looking at the past and present and pointing to future directions.

    The Notion of Presence in a Telematic Cross-Disciplinary Program for Music, Communication and Technology

    This chapter examines how students in a two-campus, cross-disciplinary program in Music, Communication and Technology (MCT) experience the sense of presence of peer students and teachers, some physically co-located while others are present via an audiovisual communications system. The chapter starts by briefly delineating the MCT program, the audiovisual communications system and the learning space built around it, named the Portal, and the research project SALTO, which frames the current study. We then review research literature on presence relevant to this particular context and use it as a basis for the design of an online survey combining Likert items and free-text responses. Our main findings, based on responses from the 16 students who participated in the survey, are that the mediating technologies of the Portal affect the experience of presence negatively, but that formal learning scenarios are less affected than informal scenarios that require social interaction.

    Learning to Code Through Web Audio: A Team-Based Learning Approach

    In this article, we discuss the challenges and opportunities of teaching programming using web audio technologies and adopting a team-based learning (TBL) approach among a mix of co-located and remote students, mostly novices in programming. The course was designed for cross-campus teaching and teamwork, in alignment with the two-city master's programme in which it was delivered. We present the results and findings from: (1) students' feedback; (2) software complexity metrics; (3) students' blog posts; and (4) the teacher's reflections. We found that the nature of web audio as a browser-based environment, coupled with the collaborative nature of the course, was suitable for improving the students' confidence in their programming ability. This approach promoted group course projects of a certain level of complexity, based on the students' interests and programming levels. We discuss the challenges of this approach, such as supporting smooth cross-campus interactions and ensuring students' prior knowledge of web technologies (HTML, CSS, JavaScript) for an optimal experience. We conclude by envisioning the scalability of this course to other distributed and remote learning scenarios in academic and professional settings, in line with a foreseen future of cross-site interaction mediated through code.

    A General Framework for Visualization of Sound Collections in Musical Interfaces

    While audio data play an increasingly central role in computer-based music production, interaction with large sound collections in most available music creation and production environments is still very often limited to scrolling through long lists of file names. This paper describes a general framework for devising interactive applications based on the content-based visualization of sound collections. The proposed framework allows for a modular combination of different techniques for sound segmentation, analysis and dimensionality reduction, using the reduced feature space for interactive applications. We analyse several prototypes presented in the literature, describe their limitations, and propose a more general framework that can be used flexibly to devise music creation interfaces. The proposed approach includes several novel contributions with respect to previously used pipelines, such as unsupervised feature learning, content-based sound icons and control of the output space layout. We present an implementation of the framework using the SuperCollider computer music language, together with three example prototypes demonstrating its use for data-driven music interfaces. Our results demonstrate the potential of unsupervised machine learning and visualization for creative applications in computer music.
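    The segmentation-analysis-reduction pipeline this abstract describes can be illustrated with a minimal, self-contained sketch. Note this is plain Python rather than the SuperCollider implementation the paper presents, and the hand-picked descriptors (RMS, zero-crossing rate, spectral centroid) and tiny power-iteration PCA are illustrative stand-ins for the framework's actual feature-learning and reduction modules:

```python
import cmath
import math
import random

def frames(signal, size=256, hop=128):
    """Segmentation step: split a signal into overlapping analysis frames."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, hop)]

def rms(frame):
    """Root-mean-square energy of one frame."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def zcr(frame):
    """Zero-crossing rate: fraction of adjacent sample pairs changing sign."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (len(frame) - 1)

def centroid(frame):
    """Spectral centroid (in bins) from a naive DFT magnitude spectrum."""
    n = len(frame)
    mags = [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(frame)))
            for k in range(n // 2)]
    total = sum(mags) or 1.0
    return sum(k * m for k, m in enumerate(mags)) / total

def describe(signal):
    """Analysis step: average per-frame descriptors into one feature vector."""
    fs = frames(signal)
    return [sum(f(fr) for fr in fs) / len(fs) for f in (rms, zcr, centroid)]

def pca_2d(rows, iters=100):
    """Reduction step: project feature vectors onto the two directions of
    largest variance (tiny PCA via power iteration with deflation)."""
    dims = len(rows[0])
    means = [sum(r[d] for r in rows) / len(rows) for d in range(dims)]
    centered = [[r[d] - means[d] for d in range(dims)] for r in rows]
    # scale each dimension to unit variance so no descriptor dominates
    scales = [math.sqrt(sum(r[d] ** 2 for r in centered) / len(rows)) or 1.0
              for d in range(dims)]
    centered = [[r[d] / scales[d] for d in range(dims)] for r in centered]
    cov = [[sum(r[i] * r[j] for r in centered) / len(rows)
            for j in range(dims)] for i in range(dims)]
    axes = []
    for _ in range(2):
        v = [1.0] * dims
        for _ in range(iters):
            w = [sum(cov[i][j] * v[j] for j in range(dims)) for i in range(dims)]
            norm = math.sqrt(sum(x * x for x in w)) or 1.0
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(dims))
                  for i in range(dims))
        axes.append(v)
        # deflate: remove the found component before finding the next axis
        cov = [[cov[i][j] - lam * v[i] * v[j] for j in range(dims)]
               for i in range(dims)]
    return [[sum(r[d] * a[d] for d in range(dims)) for a in axes]
            for r in centered]

# Four synthetic "sounds" standing in for a collection of audio files.
random.seed(1)
sounds = {
    "low sine":  [math.sin(2 * math.pi * 3 * i / 256) for i in range(1024)],
    "high sine": [math.sin(2 * math.pi * 40 * i / 256) for i in range(1024)],
    "noise":     [random.uniform(-1, 1) for _ in range(1024)],
    "quiet":     [0.05 * math.sin(2 * math.pi * 3 * i / 256) for i in range(1024)],
}
layout = pca_2d([describe(s) for s in sounds.values()])
for name, (x, y) in zip(sounds, layout):
    print(f"{name:9s} -> ({x:+.2f}, {y:+.2f})")
```

    Each sound ends up as a point in a 2-D plane, so perceptually similar sounds cluster together; an interface can then place sound icons at these coordinates instead of listing file names.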

    detuning a tuning

    detuning a tuning explores the sonic boundaries between tuning and detuning. The aural expedition deconstructs audio recordings of Greg Chryssopoulos tuning a Kawai RX-6 grand piano at the School of Music, Georgia Tech, Atlanta, USA, recorded in February 2017. The album explores algorithmic music composition using SuperCollider with spectral modelling synthesis, mainly using the FluCoMa library for the latter. The recorded tuning process is listened to by self-built algorithms and used to control an abstract synthesiser. This process generates an organic, evocative "décollage" of textural sounds merged with the recordings. The piano was Anna Xambó's first musical instrument, but the framework behind the classical piano lessons made it challenging to think outside the box. This album is an original contribution to other ways of seeing the piano, observing the first and often unseen part of the process: the tuning.

    Tracklist: a0 (06:45), traces (07:12), together (02:41), residual (07:14), triads (06:30), decollage (04:22), clock (05:20), emergence (03:43), creatures (06:05)

    Author: Anna Xambó. Mastering: Gerard Roma. Artwork: Carpal Tunnel (CC BY-NC-SA 3.0) Anna Xambó Sedó.

    You wake up in the middle of the night. In your dream, you lived along with other creatures inside a huge piano. The piano was being tuned and detuned at the same time. Strings snapped, and large pieces were breaking off, making a range of screeching, banging and scratching noises, with the occasional note. There were some voices outside, and you worried about finding a new place to live. You remember playing the piano as a kid, and wonder how it would be to take it up again, then go back to sleep.

    detuning a tuning by Anna Xambó was released on 11 February 2023 by Carpal Tunnel.