
    From Autonomous to Performative Control of Timbral Spatialisation

    Timbral spatialisation is a process that requires the independent control of potentially thousands of parameters (Torchia et al., 2003). Current research on controlling timbral spatialisation has focused either on automated generative systems, or has suggested that designing trajectories in software means writing every movement line by line (Normandeau, 2009). This research proposes that Wave Terrain Synthesis may be used as an effective bridging control structure for timbral spatialisation, enabling the performative control of the large parameter sets associated with such software. This methodology also allows for compact interactive mapping possibilities for a physical controller, and may also be effectively mapped gesturally.
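To make the bridging idea concrete, the sketch below (an illustration of the general technique, not the thesis's implementation; the terrain function, trajectory, and parameter mappings are all assumptions) shows the core mechanism of Wave Terrain Synthesis used for control: a single 2D trajectory is scanned across a terrain surface, and the height and local gradients read off along the way become several simultaneous control streams.

```python
import numpy as np

def terrain(x, y):
    # A smooth, bounded 2D "terrain" surface; any such surface works.
    return np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

def trajectory(t):
    # An elliptical orbit over the terrain; a performer would steer
    # this curve from a controller rather than scripting each
    # spatialisation parameter individually.
    return 0.5 + 0.4 * np.cos(t), 0.5 + 0.3 * np.sin(t)

# One pass of the trajectory yields a whole family of control streams:
# here the terrain height drives one parameter (say, azimuth), and its
# two spatial gradients drive two more (say, spread and distance),
# all derived from a single 2D gesture.
t = np.linspace(0, 2 * np.pi, 512)
x, y = trajectory(t)
height = terrain(x, y)                           # -> e.g. azimuth
eps = 1e-4
grad_x = (terrain(x + eps, y) - height) / eps    # -> e.g. spread
grad_y = (terrain(x, y + eps) - height) / eps    # -> e.g. distance
controls = np.stack([height, grad_x, grad_y])
```

Steering only the trajectory's centre, radius, or speed then reshapes every derived stream at once, which is what makes the approach attractive as a compact performative mapping.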

    Instruments for Spatial Sound Control in Real Time Music Performances. A Review

    The systematic arrangement of sound in space is widely considered an important compositional design category of Western art music and acoustic media art in the 20th century. Much attention has been paid to the artistic concepts of sound in space and its reproduction through loudspeaker systems; far less has been paid to live-interactive practices and tools for spatialisation as performance practice. As a contribution to this topic, the current study conducted an inventory of controllers for the real-time spatialisation of sound as part of musical performances, and classified them both along different interface paradigms and according to their scope of spatial control. By means of a literature study, we were able to identify 31 different spatialisation interfaces presented to the public in the context of artistic performances or at relevant conferences on the subject. Considering that only a small proportion of these interfaces combine spatialisation and sound production, it seems that in most cases the projection of sound in space is not delegated to a musical performer but regarded as a compositional problem or as a separate performative dimension. With the exception of the mixing desk and its fader-board paradigm as used for the performance of acousmatic music with loudspeaker orchestras, all devices are individual design solutions developed for a specific artistic context. We conclude that, if controllers for sound spatialisation are to be perceived as musical instruments in a narrow sense, meeting certain aspects of instrumentality, immediacy, liveness, and learnability, new design strategies will be required.

    Exploring Pitch and Timbre through 3D Spaces: Embodied Models in Virtual Reality as a Basis for Performance Systems Design

    Our paper builds on an ongoing collaboration between theorists and practitioners within the computer music community, with a specific focus on three-dimensional environments as an incubator for performance systems design. In particular, we are concerned with how to provide accessible means of controlling spatialization and timbral shaping in an integrated manner, by collecting performance data across several modalities from an electric guitar with a multichannel audio output. This paper focuses specifically on the combination of pitch data treated within tonal models and the detection of physical performance gestures using timbral feature extraction algorithms. We discuss how these tracked gestures may be connected to concepts and dynamic relationships from embodied cognition, expanding on performative models for pitch and timbre spaces. Finally, we explore how these ideas support connections between sonic, formal, and performative dimensions, including instrumental technique detection scenes and mapping strategies aimed at bridging music performance gestures across physical and conceptual planes.
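As an illustration of the kind of timbral feature extraction the paper refers to, here is a minimal sketch (the signals and interpretation are assumptions for illustration, not the authors' pipeline) of the spectral centroid, a standard brightness measure that rises when, for example, a guitarist picks closer to the bridge:

```python
import numpy as np

def spectral_centroid(frame, sr=44100):
    """Brightness measure: amplitude-weighted mean frequency of a frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    if spectrum.sum() == 0:
        return 0.0
    return float((freqs * spectrum).sum() / spectrum.sum())

sr = 44100
t = np.arange(1024) / sr
dull = np.sin(2 * np.pi * 220 * t)       # low-register tone
bright = np.sin(2 * np.pi * 3520 * t)    # high-register tone
# A gesture that brightens the timbre raises the centroid; thresholding
# such features is one crude form of technique detection.
```

In a performance system, streams of features like this would feed the gesture-detection and mapping layers the paper describes.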

    An Introduction to Interactive Music for Percussion and Computers

    Composers began combining acoustic performers with electronically produced sounds in the early twentieth century. As computer processing power increased, the potential for significant musical communication grew. Despite the body of research concerning electronic music, performing a composition with a computer partner remains intimidating for performers. The purpose of this paper is to provide an introductory method for interacting with a computer. This document first follows the parallel evolution of percussion and electronics in order to reveal how each medium influenced the other. The following section defines interaction and explains how it applies to musical communication between humans and computers. The next section introduces specific techniques used to cultivate human-computer interaction. The roles of performer, instrument, composer, and conductor are then defined as they apply to the human performer and the computer. Performers who are aware of these roles will develop richer communication that can enhance both the performer's and the audience's recognition of human-computer interaction. In the final section, works for percussion and computer are analyzed to reveal varying levels of interaction and the shifting roles of the performer. Three compositions illustrate this point: 120bpm from neither Anvil nor Pulley by Dan Trueman, It's Like the Nothing Never Was by Von Hansen, and Music for Snare Drum and Computer by Cort Lippe. These three pieces form a continuum of increasing interaction, moving from interaction within a fully defined score, to improvisation with digital synthesis, to the manipulation of computerized compositional algorithms using performer input. The unique ways each composer creates interaction expose the vast possibilities for performing with interactive music systems.

    Electrifying Opera, Amplifying Agency: Designing a performer-controlled interactive audio system for opera singers

    This artistic research project examines the artistic, technical, and pedagogical challenges of developing a performer-controlled interactive technology for real-time vocal processing of the operatic voice. As a classically trained singer-composer, I have explored ways to merge the compositional aspects of transforming electronic sound with the performative aspects of embodied singing. I set out to design, develop, and test a prototype for an interactive vocal processing system using sampling and audio processing methods. The aim was to foreground and accommodate an unamplified operatic voice interacting with the room's acoustics and the extended, disembodied voices of the same performer. The iterative prototyping explored the performer's relationship to the acoustic space, the relationship between the embodied acoustic voice and the disembodied processed voice(s), and the relationship to memory and time. One of the core challenges was to design a system that would accommodate mobility and allow interaction based on auditory and haptic cues rather than visual ones; in other words, a system allowing the singer to control their sonic output without standing behind a laptop. I wished to highlight and amplify the performer's agency with a system that would enable nuanced and variable vocal processing and be robust, teachable, and suitable for use in various settings: solo performances, ensembles of various types and sizes, and opera. This entailed mediating the different needs, training, and working methods of both electronic music and opera practitioners. One key finding was that even simple audio processing, primarily combinations of feedback and delay lines, could achieve complex musical results when paired with continuous gestural control and the ability to route signals to four channels. This complexity sometimes led to surprising results, eliciting improvisatory responses even from singers without musical improvisation experience. The project has resulted in numerous vocal solo, chamber, and operatic performances in Norway, the Netherlands, Belgium, and the United States. The research contributes to developing emerging technologies for live electronic vocal processing in opera, developing the improvisational performance skills needed to engage with those technologies, and exploring alternatives for sound diffusion conducive to working with unamplified operatic voices.
    Links: Exposition and documentation of PhD research in Research Catalogue: Electrifying Opera, Amplifying Agency. Artistic results. Reflection and Public Presentations (PhD) (2023): https://www.researchcatalogue.net/profile/show-exposition?exposition=2222429
    Home/Reflections: https://www.researchcatalogue.net/view/2222429/2222460
    Mapping & Prototyping: https://www.researchcatalogue.net/view/2222429/2247120
    Space & Speakers: https://www.researchcatalogue.net/view/2222429/2222430
    Presentations: https://www.researchcatalogue.net/view/2222429/2247155
    Artistic Results: https://www.researchcatalogue.net/view/2222429/222248
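A minimal sketch of the kind of processing described, a delay line with feedback plus gain-based routing to four channels (the delay time, feedback amount, and routing gains are illustrative assumptions, not the project's actual patch):

```python
import numpy as np

def delay_with_feedback(x, delay_samples, feedback):
    """Single delay line with feedback, written as a plain sample loop."""
    y = np.zeros(len(x))
    buf = np.zeros(delay_samples)   # circular delay buffer
    idx = 0
    for n in range(len(x)):
        delayed = buf[idx]
        y[n] = delayed
        buf[idx] = x[n] + feedback * delayed   # feed output back in
        idx = (idx + 1) % delay_samples
    return y

# Route the processed voice to four channels with per-channel gains,
# the kind of continuous parameter a performer would drive gesturally.
voice = np.random.default_rng(0).standard_normal(44100) * 0.1
wet = delay_with_feedback(voice, delay_samples=4410, feedback=0.5)
gains = np.array([1.0, 0.7, 0.4, 0.1])        # hypothetical spatial routing
four_channel = wet[None, :] * gains[:, None]
```

Even this small structure already exhibits the finding reported above: varying the feedback and routing gains continuously produces musically complex, sometimes surprising behaviour from very simple components.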

    Audio-Based Visualization of Expressive Body Movements in Music Performance: An Evaluation of Methodology in Three Electroacoustic Compositions

    An increase in collaboration amongst visual artists, performance artists, musicians, and programmers has given rise to the exploration of multimedia performance arts. A methodology for audio-based visualization has been created that integrates the information of sound with the visualization of physical expressions, with the goal of magnifying the expressiveness of the performance. The emphasis is placed on exalting the music by using the audio to affect and enhance the video processing, while the video does not affect the audio at all; in this sense, the music is considered autonomous of the video. The audio-based visualization can provide the audience with a deeper appreciation of the music. Unique implementations of the methodology have been created for three compositions, and a qualitative analysis of each implementation is employed to evaluate both the technological and aesthetic merits of each composition.
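A minimal sketch of the one-directional mapping the methodology describes, with an audio feature (here a simple RMS envelope, an assumed feature rather than the authors' specific analysis) driving a video parameter while leaving the audio untouched:

```python
import numpy as np

def rms_envelope(audio, frame_size=1024):
    """Per-frame RMS level of the audio signal."""
    n_frames = len(audio) // frame_size
    frames = audio[:n_frames * frame_size].reshape(n_frames, frame_size)
    return np.sqrt((frames ** 2).mean(axis=1))

def brightness_curve(audio, frame_size=1024):
    """Map RMS to a 0..1 video brightness parameter. The audio is only
    read, never modified: the mapping is strictly one-way, so the music
    remains autonomous of the video."""
    env = rms_envelope(audio, frame_size)
    peak = env.max()
    return env / peak if peak > 0 else env

sr = 44100
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t) * np.linspace(0, 1, sr)  # crescendo
brightness = brightness_curve(audio)   # rises with the crescendo
```

Each of the three compositions would substitute its own feature extraction and visual parameters, but the one-directional audio-to-video flow stays the same.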

    Distributed Networks of Listening and Sounding: 20 Years of Telematic Musicking

    This paper traces a twenty-year arc of my performance and compositional practice in the medium of telematic music, focusing on a distinct approach to fostering interdependence and emergence through the integration of listening strategies, electroacoustic improvisation, pre-composed structures, blended real/virtual acoustics, networked mutual influence, shared signal transformations, gesture-concepts, and machine agencies. Communities of collaboration and exchange over this period are discussed, spanning both pre- and post-pandemic approaches to the medium that range from metaphors of immersion and dispersion to diffraction.