
    Music Information Retrieval in Live Coding: A Theoretical Framework

    The work presented in this article was conducted partly while the first author was at Georgia Tech from 2015–2017, with the support of the School of Music, the Center for Music Technology, and Women in Music Tech at Georgia Tech, and partly while the first author was at Queen Mary University of London from 2017–2019, with the support of the AudioCommons project, funded by the European Commission through the Horizon 2020 programme, research and innovation grant 688382. The file attached to this record is the author's final peer-reviewed version; the publisher's final version can be found by following the DOI link.

    Music information retrieval (MIR) has great potential in musical live coding because it can help the musician–programmer make musical decisions based on audio content analysis and explore new sonorities by means of MIR techniques. Real-time MIR techniques can be computationally demanding and thus have rarely been used in live coding; when they have been used, the focus has been on low-level feature extraction. This article surveys and discusses the potential of MIR applied to live coding at a higher musical level. We propose a conceptual framework of three categories: (1) audio repurposing, (2) audio rewiring, and (3) audio remixing. We explored the three categories in live performance through MIRLC, an application programming interface library written in SuperCollider. We found that using high-level features in real time remains a technical challenge, yet combining rhythmic and tonal properties (mid-level features) with text-based information (e.g., tags) helps achieve a closer perceptual level centered on pitch and rhythm when using MIR in live coding. We discuss challenges and future directions for MIR approaches in the computer music field.
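The distinction the abstract draws between low-level features and mid-level rhythmic/tonal properties can be made concrete with a small sketch. The authors' MIRLC library is written in SuperCollider; the following Python/NumPy fragment is only an illustrative stand-in, computing one typical low-level feature, the spectral centroid, which correlates with perceived brightness:

```python
import numpy as np

def spectral_centroid(frame, sample_rate):
    """Low-level feature: the 'centre of mass' of the magnitude
    spectrum, often used as a proxy for perceived brightness."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    if spectrum.sum() == 0:
        return 0.0
    return float((freqs * spectrum).sum() / spectrum.sum())

# A Hann-windowed 440 Hz sine concentrates its spectral energy
# around 440 Hz, so the centroid should land close to that value.
sr = 44100
t = np.arange(2048) / sr
tone = np.hanning(2048) * np.sin(2 * np.pi * 440 * t)
print(spectral_centroid(tone, sr))
```

Mid-level properties such as tempo or key would be derived from many such frame-level features over time, which is part of why they are harder to compute reliably in real time.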

    The creative process behind Dialogismos I: theoretical and technical considerations

    This paper examines the aesthetic dimension and the technical realization of Dialogismos I, a piece for alto saxophone and electronics by the composer Nuno Peixoto de Pinho. The conceptual basis of the work is the notion of ‘intertextuality’ coined by the Bulgarian-French philosopher and literary critic Julia Kristeva, later transposed to the musical domain by J. Peter Burkholder under the concept of ‘musical borrowing’. The compositional problems raised by adopting intertextual musical thinking as a key driver of the composition were solved using two approaches. The first was the manual selection of elements, at different granularities, from several musical works to devise the overall structure of the piece and to create the saxophone score. The second, applied to the realization of the electronic part, relied on concatenative sound synthesis both as an algorithmic computer-assisted composition method and as a real-time synthesis technique.
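Concatenative sound synthesis, as used for the electronic part, selects short corpus segments (units) whose audio features best match a target sequence. The piece's actual implementation is not described at this level of detail; the following Python/NumPy sketch, with made-up feature values, only illustrates the core unit-selection step:

```python
import numpy as np

# Hypothetical corpus: each unit (short sound segment) is described
# by a feature vector, e.g. [spectral centroid in kHz, RMS loudness].
corpus_features = np.array([
    [0.5, 0.2],   # dark, quiet unit
    [2.0, 0.8],   # bright, loud unit
    [1.0, 0.5],   # medium unit
])

def select_units(target_features, corpus):
    """For each target frame, pick the index of the corpus unit whose
    features are closest in Euclidean distance (basic unit selection)."""
    picks = []
    for target in target_features:
        distances = np.linalg.norm(corpus - target, axis=1)
        picks.append(int(np.argmin(distances)))
    return picks

target = np.array([[0.6, 0.25], [1.9, 0.7]])
print(select_units(target, corpus_features))  # → [0, 1]
```

In a real system the selected units would then be concatenated (with crossfades) for playback, either offline as a composition aid or streamed in real time.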

    Algorithmic Compositional Methods and their Role in Genesis: A Multi-Functional Real-Time Computer Music System

    Algorithmic procedures have been applied in computer music systems to generate compositional products using conventional musical formalism, extensions of such musical formalism and extra-musical disciplines such as mathematical models. This research investigates the applicability of such algorithmic methodologies for real-time musical composition, culminating in Genesis, a multi-functional real-time computer music system written for Mac OS X in the SuperCollider object-oriented programming language, and contained in the accompanying DVD. Through an extensive graphical user interface, Genesis offers musicians the opportunity to explore the application of the sonic features of real-time sound-objects to designated generative processes via different models of interaction such as unsupervised musical composition by Genesis and networked control of external Genesis instances. As a result of the applied interactive, generative and analytical methods, Genesis forms a unique compositional process, with a compositional product that reflects the character of its interactions between the sonic features of real-time sound-objects and its selected algorithmic procedures. Within this thesis, the technologies involved in algorithmic methodologies used for compositional processes, and the concepts that define their constructs are described, with consequent detailing of their selection and application in Genesis, with audio examples of algorithmic compositional methods demonstrated on the accompanying DVD. To demonstrate the real-time compositional abilities of Genesis, free explorations with instrumentalists, along with studio recordings of the compositional processes available in Genesis are presented in audiovisual examples contained in the accompanying DVD. 
The evaluation of the Genesis system’s capability to form a real-time compositional process, thereby maintaining real-time interaction between the sonic features of real-time sound-objects and its selected algorithmic compositional methods, focuses on existing evaluation techniques founded in human–computer interaction (HCI) and the qualitative issues such evaluation methods present. In terms of the compositional products generated by Genesis, the challenges in quantifying and qualifying its compositional outputs are identified, demonstrating the intricacies of assessing generative methods of compositional processes and their impact on a resulting compositional product. The thesis concludes by considering further advances and applications of Genesis, and inviting further dissemination of the Genesis system and promotion of research into evaluative methods of generative techniques, with the hope that this may provide additional insight into the relative success of products generated by real-time algorithmic compositional processes.

    Towards user-friendly audio creation


    Analysis on Using Synthesized Singing Techniques in Assistive Interfaces for Visually Impaired to Study Music

    Tactile and auditory senses are the primary means by which visually impaired people perceive the world, and their interaction with assistive technologies likewise focuses mainly on tactile and auditory interfaces. This paper discusses the validity of using the most appropriate singing-synthesis techniques as a mediator in assistive technologies built specifically to address the music-learning needs of visually impaired users working with music scores and lyrics. Music scores with notations and lyrics are the main mediators in the musical communication channel between a composer and a performer, yet visually impaired music lovers have little opportunity to access them, since most exist only in visual formats. In a music score, the vocal performer’s melody is realized together with the lyrics in the form of singing, and singing is better suited to a temporal format than to a tactile format in the spatial domain. Conversion of the existing visual format to a singing output is therefore the most appropriate lossless transition, as indicated by the initial research on an adaptive music-score trainer for the visually impaired [1]. To extend that initial research, this study examines existing singing-synthesis techniques and research on auditory interfaces.

    Composing Music for Acoustic Instruments and Electronics Mediated Through the Application of Microsound

    This project seeks to extend, through a portfolio of compositions, the use of microsound to mixed works incorporating acoustic instruments and electronics. Issues relating to the notation of microsound when used with acoustic instruments are explored, and the adoption of a clear and intuitive system of graphical notation is proposed. The design of the performance environment for the electroacoustic part is discussed and different models for the control of the electronics are considered. Issues relating to structure and form when applied to compositions that mix note-based material with texture-based material are also considered. A framework based on a pure sound/noise continuum, used in conjunction with a hierarchy of gestural archetypes, is adopted as a possible solution to the challenges of structuring mixed compositions. Gestural and textural relationships between different parts of the compositions are also explored, and the use of extended instrumental techniques to create continua between the acoustic and the electroacoustic is adopted. The role of aleatoric techniques and improvisation in both the acoustic and the electroacoustic parts is explored through the adoption of an interactive performance environment incorporating a pitch-tracking algorithm. Finally, the advantages and disadvantages of real-time recording and processing of the electronic part, compared with live processing of pre-existing sound files, are discussed.
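The abstract does not specify which pitch-tracking algorithm the interactive environment uses. A common core of such algorithms is autocorrelation-based period estimation, sketched here as an illustrative Python/NumPy stand-in (not the project's actual implementation):

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency by locating the
    autocorrelation peak within the plausible period range."""
    # Keep only non-negative lags of the autocorrelation.
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(sample_rate / fmax)   # shortest period considered
    lag_max = int(sample_rate / fmin)   # longest period considered
    lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / lag

# A 220 Hz sine should yield an estimate close to 220 Hz.
sr = 44100
t = np.arange(4096) / sr
print(estimate_pitch(np.sin(2 * np.pi * 220 * t), sr))
```

Real-time systems refine this basic idea (e.g. windowing, normalization, and octave-error correction) to stay robust on noisy instrumental input.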