
    Distributed Networks of Listening and Sounding: 20 Years of Telematic Musicking

    This paper traces a twenty-year arc of my performance and compositional practice in the medium of telematic music, focusing on a distinct approach to fostering interdependence and emergence through the integration of listening strategies, electroacoustic improvisation, pre-composed structures, blended real/virtual acoustics, networked mutual influence, shared signal transformations, gesture-concepts, and machine agencies. Communities of collaboration and exchange over this period are discussed, spanning both pre- and post-pandemic approaches to the medium that range from metaphors of immersion and dispersion to diffraction.

    Deus Ex Machina, December 12, 2010

    This is the concert program of the Deus Ex Machina performance on Sunday, December 12, 2010, at 8:00 p.m. in the Marshall Room, 855 Commonwealth Avenue, Boston, Massachusetts. Works performed were Slimecake (class edit) by Maxwell Chamberlin, Whale Song by Katrina Peters, Blub by Brian Gaffney, Monster Chase by Christopher James, 269090 by Richard Gruenler, X.O. by Aaron Kirschner, three stones by Heather Stebbins, Dilates by Lesly Hinger, Alone Time by Joshua Liu, Floating Dream by Yi-Chun Hung, Treetops by Patrick Manian, Non-Holonomic Perturbations by Gabe Venegas, and Peakin' and Freakin' by Greg Friedlander. Digitization for Boston University Concert Programs was supported by the Boston University Center for the Humanities Library Endowed Fund.

    Physical controller development for real time 3D audio spatialization

    The development of space as a musical parameter can be traced back to the Baroque and Classical periods, when performances would make use of unusual places inside churches or concert halls to augment the dramatic impact of certain works. Even so, this was not enough for the composers who followed, as they started to think about space as a real parameter for musical composition, just as important as, for example, pitch and timbre. Direction and trajectory were then in the minds of twentieth-century composers, and from then on they were aided by innovative technology and imaginative musical notation formats to take bigger steps towards present-day spatialization in music. This dissertation focuses on what happens next, when computers are a common tool for musicians and composers and spatialization is surrounded by technology and software. Different algorithms and spatialization software allow for improved techniques of manipulating a virtual sound source's behavior in space. This manipulation can either be programmed or drawn with the aid of a three-dimensional control mechanism. It was then considered that a qualitative methodological approach would help in understanding some of the choices made so far surrounding this three-dimensional control of sound sources. With that in consideration, this dissertation describes the process and results of developing a three-dimensional control interface that features physical responsiveness to the user's movement. The study is divided into three main chapters, which offer: a historical view and bibliographical research on the state of the art of sound spatialization in a musical context; an overview of the 3D controller design/construction process; and a description of the tests and an analysis of their results on some initial behavior of the controller. The final result is a tested controller that uncovers requirements for future work to transform it into a spatialization instrument.
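
    The dissertation does not reproduce its mapping code here, but the core task such a controller performs can be sketched briefly: converting a Cartesian controller position into the azimuth, elevation, and distance values that spatialization engines typically expect. The coordinate conventions and function names below are assumptions for illustration, not the dissertation's own interface.

```typescript
// Minimal sketch: map a 3D controller position (x = right, y = front,
// z = up, in metres, listener at the origin) to spherical coordinates.
interface SphericalSource {
  azimuthDeg: number;   // 0 = straight ahead, positive = counterclockwise
  elevationDeg: number; // 0 = ear level, positive = above
  distance: number;     // metres from the listener
}

function controllerToSpherical(x: number, y: number, z: number): SphericalSource {
  const distance = Math.hypot(x, y, z);
  // atan2(-x, y): 0 degrees straight ahead, growing to the listener's left.
  const azimuthDeg = (Math.atan2(-x, y) * 180) / Math.PI;
  const elevationDeg = (Math.atan2(z, Math.hypot(x, y)) * 180) / Math.PI;
  return { azimuthDeg, elevationDeg, distance };
}

// Example: a source one metre ahead of and slightly above the listener.
console.log(controllerToSpherical(0, 1, 0.2));
```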

    Music in Virtual Space: Theories and Techniques for Sound Spatialization and Virtual Reality-Based Stage Performance

    This research explores virtual reality as a medium for live concert performance. I have realized compositions in which the individual performing on stage uses a VR head-mounted display complemented by other performance controllers to explore a composed virtual space. Movements and objects within the space are used to influence and control sound spatialization and diffusion, musical form, and sonic content. Audience members observe this in real time, watching the performer's journey through the virtual space on a screen while listening to spatialized audio on loudspeakers variable in number and position. The major artistic challenge I will explore through this activity is the relationship between virtual space and musical form. I will also explore and document the technical challenges of this activity, resulting in a shareable software tool called the Multi-source Ambisonic Spatialization Interface (MASI), which is useful in creating a bridge between VR technologies and associated software, ambisonic spatialization techniques, sound synthesis, and audio playback and effects, and establishes a unique workflow for working with sound in virtual space.
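
    At the core of any ambisonic bridge like MASI is the encoding of each mono source into a spherical-harmonic representation. The sketch below is a generic first-order (B-format, FuMa-weighted) encoder, not MASI's actual code; the function and field names are assumptions.

```typescript
// Encode one mono sample into first-order ambisonic B-format (FuMa).
interface BFormatSample { w: number; x: number; y: number; z: number }

function encodeFirstOrder(
  sample: number,      // mono input sample
  azimuthRad: number,  // 0 = front, positive = counterclockwise
  elevationRad: number // 0 = horizon, positive = up
): BFormatSample {
  const cosEl = Math.cos(elevationRad);
  return {
    w: sample * Math.SQRT1_2,                 // omnidirectional, -3 dB
    x: sample * Math.cos(azimuthRad) * cosEl, // front-back figure-eight
    y: sample * Math.sin(azimuthRad) * cosEl, // left-right figure-eight
    z: sample * Math.sin(elevationRad)        // up-down figure-eight
  };
}
```

    Because the spatial image lives in these four channels rather than in any particular speaker feed, a separate decoding stage can matrix them to whatever loudspeaker layout is present, which is what permits "loudspeakers variable in number and position."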

    Reducing and removing barriers to spatial audio : applications of capital as a critical framework to promote inclusion in spatial audio : a thesis submitted to Massey University in partial fulfilment of the requirements for the degree of Doctorate of Philosophy in Music at Massey University, Wellington, New Zealand

    The research within this thesis aims to address the question of whether barriers of capital to the field of spatial audio can be reduced or removed. Spatial audio is the musical utilization of space, where spatialization is the salient feature of the musical work. As a field, it primarily exists within academic and art institutions. Because of this, there are numerous barriers that prohibit people from engaging with the field, including significant technical requirements, the need for education, and the expense of large spatial audio systems, amongst others. These barriers mean that those who are excluded have little to no pathway into the field. This thesis explores the barriers in spatial audio through the lens of capital. Viewed as one's level of resource, a lack of economic, social, symbolic, cultural, and physical capital can exclude many from engaging with spatial audio. The research within this thesis identifies barriers of capital that exist within the field through qualitative and quantitative survey analysis as well as literature review. The identified barriers are then addressed through practice-led and practice-based research with the creation of new spatial audio works and compositional strategies, alongside user surveys to ascertain the efficacy of the research.

    Musique Concrète Choir: An Interactive Performance Environment for Any Number of People

    Presented at the 2nd Web Audio Conference (WAC), April 4-6, 2016, Atlanta, Georgia. Using the Web Audio API, a roomful of smartphones becomes a platform on which to create novel musical experiences. As seen at WAC 2015, composers and performers are using this platform to create clouds of sound distributed in space through dozens of loudspeakers. This new platform offers an opportunity to reinvent the roles of audience, composer, and performer. It also presents new technology challenges; at WAC 2015 some servers crashed under load. We also saw difficulties creating and joining private WiFi networks. In this piece, building on the lessons of WAC 2015, we load all our sound resources onto each phone at the beginning of the piece from a stable, well-known web host. Where possible, we use the new Service Worker API to cache our resources locally on the phone. We also replace real-time streaming control of a roomful of phones with real-time engagement of the audience members as performers.
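
    The caching strategy described here is concrete enough to sketch. The Service Worker calls below are the standard API; the cache name and asset paths are assumptions, and a real piece would list its actual sound files.

```typescript
// sw.ts - hypothetical service worker that pre-caches a piece's sounds
// at install time and serves them locally thereafter. Compile against
// the "webworker" lib; the asset paths are placeholders.
declare const self: ServiceWorkerGlobalScope;

const CACHE = 'choir-sounds-v1';
const ASSETS = ['/index.html', '/sounds/bell.mp3', '/sounds/choir.mp3'];

// Download every resource once, while the venue network still works.
self.addEventListener('install', (event: ExtendableEvent) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

// Serve from the local cache; fall back to the network only for
// anything that was not pre-cached.
self.addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```

    Once installed, each phone can keep playing its part even if the room's WiFi or the remote host buckles mid-performance, which addresses the server-crash and network problems observed at WAC 2015.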

    Timbre hybridization processes and strategies. A Portfolio of Compositions

    This document describes the processes and development of my compositional work, particularly concerning the introduction of modifications of timbral qualities, including combinations and hybridization procedures. It describes compositional methodologies, developed within a technological environment, and the interrelation between theoretical thought and computational approach. The following chapters present time, frequency, and timbre as materials of investigation, analysis, and re-composition, through real-time electroacoustic strategies and treatments. The preparation and design of specific software, through the utilization of the programming language Max/MSP Jitter, will illustrate the computational approach to composing, its inner correspondence with the theoretical approach, and its interconnections with preparation and performing activity. Procedures progressively applied to the portfolio of compositions are presented in the final chapters of the document. The portfolio consists of six works completed during the last six years, for instruments and real-time electronic treatment, presented as a CD with the complete recordings of three compositions, four scores, and a DVD containing video recordings of two works. The last three compositions presented are also part of a cycle of works (still in progress) dedicated to the whole instrumental spectrum, in which the voice represents the physical-musical material of each work.
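
    One widely used hybridization procedure of the general kind described here is spectral cross-synthesis: imposing the magnitude spectrum of one source on the phase spectrum of another. The sketch below is a generic single-frame illustration, not the author's Max/MSP patches, and it uses a naive DFT for brevity where a real-time treatment would use an FFT.

```typescript
// Spectral cross-synthesis on a single analysis frame: take the
// magnitude spectrum of frame A and the phase spectrum of frame B.
type Spectrum = { re: number[]; im: number[] };

function dft(frame: number[]): Spectrum {
  const n = frame.length;
  const re = new Array<number>(n).fill(0);
  const im = new Array<number>(n).fill(0);
  for (let k = 0; k < n; k++) {
    for (let t = 0; t < n; t++) {
      const angle = (-2 * Math.PI * k * t) / n;
      re[k] += frame[t] * Math.cos(angle);
      im[k] += frame[t] * Math.sin(angle);
    }
  }
  return { re, im };
}

function inverseDft(spec: Spectrum): number[] {
  const n = spec.re.length;
  const out = new Array<number>(n).fill(0);
  for (let t = 0; t < n; t++) {
    for (let k = 0; k < n; k++) {
      const angle = (2 * Math.PI * k * t) / n;
      out[t] += spec.re[k] * Math.cos(angle) - spec.im[k] * Math.sin(angle);
    }
    out[t] /= n; // real part of the inverse transform, exact here
  }
  return out;
}

// Magnitudes from A, phases from B -> hybrid timbre.
function crossSynthesize(frameA: number[], frameB: number[]): number[] {
  const a = dft(frameA);
  const b = dft(frameB);
  const n = frameA.length;
  const hybrid: Spectrum = {
    re: new Array<number>(n).fill(0),
    im: new Array<number>(n).fill(0)
  };
  for (let k = 0; k < n; k++) {
    const mag = Math.hypot(a.re[k], a.im[k]);   // magnitude of A
    const phase = Math.atan2(b.im[k], b.re[k]); // phase of B
    hybrid.re[k] = mag * Math.cos(phase);
    hybrid.im[k] = mag * Math.sin(phase);
  }
  return inverseDft(hybrid);
}
```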

    Music Information Retrieval in Live Coding: A Theoretical Framework

    The work presented in this article was partly conducted while the first author was at Georgia Tech from 2015–2017, with the support of the School of Music, the Center for Music Technology, and Women in Music Tech at Georgia Tech. Another part of this research was conducted while the first author was at Queen Mary University of London from 2017–2019, with the support of the AudioCommons project, funded by the European Commission through the Horizon 2020 programme, research and innovation grant 688382. The file attached to this record is the author's final peer-reviewed version. The Publisher's final version can be found by following the DOI link. Music information retrieval (MIR) has great potential in musical live coding because it can help the musician–programmer to make musical decisions based on audio content analysis and explore new sonorities by means of MIR techniques. The use of real-time MIR techniques can be computationally demanding, and thus they have rarely been used in live coding; when they have been used, it has been with a focus on low-level feature extraction. This article surveys and discusses the potential of MIR applied to live coding at a higher musical level. We propose a conceptual framework of three categories: (1) audio repurposing, (2) audio rewiring, and (3) audio remixing. We explored the three categories in live performance through MIRLC, an application programming interface library written in SuperCollider. We found that it is still a technical challenge to use high-level features in real time, yet using rhythmic and tonal properties (mid-level features) in combination with text-based information (e.g., tags) helps to achieve a closer perceptual level centered on pitch and rhythm when using MIR in live coding. We discuss challenges and future directions of utilizing MIR approaches in the computer music field.
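
    As a concrete picture of the "low-level feature extraction" that has dominated real-time MIR use so far, the sketch below estimates the spectral centroid (a rough brightness measure) of live input. It is a generic browser-side example using the standard Web Audio AnalyserNode, not MIRLC, which is a SuperCollider library.

```typescript
// Generic real-time low-level MIR sketch (not MIRLC): estimate the
// spectral centroid of live microphone input once per animation frame.
async function monitorCentroid(): Promise<void> {
  const ctx = new AudioContext();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = ctx.createMediaStreamSource(stream);

  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048; // 1024 frequency bins
  source.connect(analyser);

  const mags = new Float32Array(analyser.frequencyBinCount);
  const binHz = ctx.sampleRate / analyser.fftSize;

  const tick = () => {
    analyser.getFloatFrequencyData(mags); // bin magnitudes in dB
    let num = 0;
    let den = 0;
    for (let k = 0; k < mags.length; k++) {
      const lin = 10 ** (mags[k] / 20); // dB -> linear magnitude
      num += k * binHz * lin;
      den += lin;
    }
    const centroidHz = den > 0 ? num / den : 0;
    console.log(`spectral centroid ~ ${centroidHz.toFixed(0)} Hz`);
    requestAnimationFrame(tick);
  };
  tick();
}
```

    A live-coded process could then branch on such a value, for example choosing darker or brighter material, which is the kind of audio-content-driven decision the article advocates extending to mid- and high-level features.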