2,236 research outputs found

    A multimodal framework for interactive sonification and sound-based communication


    Electrifying Opera, Amplifying Agency: Designing a performer-controlled interactive audio system for opera singers

    This artistic research project examines the artistic, technical, and pedagogical challenges of developing a performer-controlled interactive technology for real-time vocal processing of the operatic voice. As a classically trained singer-composer, I have explored ways to merge the compositional aspects of transforming electronic sound with the performative aspects of embodied singing. I set out to design, develop, and test a prototype for an interactive vocal processing system using sampling and audio processing methods. The aim was to foreground and accommodate an unamplified operatic voice interacting with the room's acoustics and with the extended, disembodied voices of the same performer. The iterative prototyping explored the performer's relationship to the acoustic space, the relationship between the embodied acoustic voice and the disembodied processed voice(s), and the relationship to memory and time. One of the core challenges was to design a system that would accommodate mobility and allow interaction based on auditory and haptic cues rather than visual ones. In other words, a system allowing the singer to control their sonic output without standing behind a laptop. I wished to highlight and amplify the performer's agency with a system that would enable nuanced and variable vocal processing and be robust, teachable, and suitable for use in various settings: solo performances, ensembles of various types and sizes, and opera. This entailed mediating the different needs, training, and working methods of both electronic music and opera practitioners. One key finding was that even simple audio processing could yield complex musical results. The audio processes used were primarily combinations of feedback and delay lines, yet continuous gestural control and the ability to route signals to four channels allowed performers to achieve complex results quickly. This complexity sometimes led to surprising results, eliciting improvisatory responses even from singers without musical improvisation experience. The project has resulted in numerous vocal solo, chamber, and operatic performances in Norway, the Netherlands, Belgium, and the United States. The research contributes to developing emerging technologies for live electronic vocal processing in opera, developing the improvisational performance skills needed to engage with those technologies, and exploring alternatives for sound diffusion conducive to working with unamplified operatic voices.
    Links: Exposition and documentation of PhD research in Research Catalogue: Electrifying Opera, Amplifying Agency. Artistic Results. Reflection and Public Presentations (PhD) (2023): https://www.researchcatalogue.net/profile/show-exposition?exposition=2222429
    Home/Reflections: https://www.researchcatalogue.net/view/2222429/2222460
    Mapping & Prototyping: https://www.researchcatalogue.net/view/2222429/2247120
    Space & Speakers: https://www.researchcatalogue.net/view/2222429/2222430
    Presentations: https://www.researchcatalogue.net/view/2222429/2247155
    Artistic Results: https://www.researchcatalogue.net/view/2222429/222248
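The delay-based processing described in the abstract (combinations of feedback and delay lines under continuous control) can be illustrated with a minimal sketch. The function below is a generic single-tap feedback delay, not the project's actual patch; the parameter names are illustrative only.

```python
def feedback_delay(signal, delay_samples, feedback, mix):
    """Apply a single feedback delay line to a mono signal.

    delay_samples: delay length in samples
    feedback: 0..1, portion of the delayed output fed back into the buffer
    mix: 0..1, wet/dry balance (1.0 = delayed signal only)
    """
    buf = [0.0] * delay_samples   # circular delay buffer
    out = []
    idx = 0
    for x in signal:
        delayed = buf[idx]                      # read the delayed sample
        buf[idx] = x + delayed * feedback       # write input plus feedback
        idx = (idx + 1) % delay_samples
        out.append(x * (1.0 - mix) + delayed * mix)
    return out
```

Feeding an impulse through this line produces a decaying train of echoes spaced `delay_samples` apart, which hints at how such simple structures can yield complex results once the parameters are varied gesturally in real time.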

    Synesthetic: Composing Works for Marimba and Automated Lighting

    This paper describes a series of explorations aimed at developing new modes of performance using percussion and computer-controlled lighting, linked by electronic sensing technology. Music and colour are often imagined to be related, and parallels have been drawn between the colour spectrum and the keyboard. Some people experience a condition, chromesthesia (a type of synesthesia), in which experiences of colour and sound are linked in the brain. In our work, we sought to explore such links and render them on stage as part of a musical performance. Over the course of this project, tools and strategies were developed to create a performance work consisting of five short movements, each emphasising a different interactive strategy between the performer, lights, and composition. In this paper, we describe the tools created to support this work: a custom wearable lighting and sensing system, and a microcontroller-based OSC-to-DMX lighting controller. We discuss each composition and how the interactions reflect ideas about synesthesia.
    Arts Council Norway
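The paper's OSC-to-DMX controller is not specified in detail here, but the translation it performs can be sketched. The function below assumes a hypothetical address scheme (`/light/<n>/rgb`) and three DMX channels per fixture; both are assumptions for illustration, not the authors' design.

```python
def osc_to_dmx(address, args, universe):
    """Map a hypothetical OSC message to DMX channel values.

    address: e.g. '/light/2/rgb' (fixture number is 1-based)
    args: floats in 0.0-1.0 for red, green, blue
    universe: mutable list of 512 DMX channel values (0-255)
    """
    parts = address.strip('/').split('/')
    if len(parts) != 3 or parts[0] != 'light' or parts[2] != 'rgb':
        return universe                # ignore addresses we don't handle
    fixture = int(parts[1])
    base = (fixture - 1) * 3           # three channels per fixture
    for i, level in enumerate(args[:3]):
        level = min(max(level, 0.0), 1.0)          # clamp to 0..1
        universe[base + i] = int(round(level * 255))  # scale to 8-bit DMX
    return universe
```

In a real controller the resulting universe buffer would be streamed out continuously over the DMX serial link; the mapping step shown is the part that connects the OSC vocabulary to raw channel levels.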

    Design Strategies for Adaptive Social Composition: Collaborative Sound Environments

    In order to develop successful collaborative music systems, a variety of subtle interactions need to be identified and integrated. Gesture capture, motion tracking, real-time synthesis, environmental parameters, and ubiquitous technologies can each be used effectively to develop innovative approaches to instrument design, sound installations, interactive music, and generative systems. Current solutions tend to prioritise one or more of these approaches, refining a particular interface technology, software design, or compositional approach developed for a specific composition, performer, or installation environment. Within this diverse field, a group of novel controllers described as ‘Tangible Interfaces’ has been developed. These are intended for use by novices and in many cases follow a simple model of interaction in which user actions directly control synthesis parameters. Other approaches offer sophisticated compositional frameworks, but many of these are idiosyncratic and highly personalised; as such, they are difficult to engage with and ineffective for groups of novices. The objective of this research is to develop effective design strategies for implementing collaborative sound environments, using key terms and vocabulary drawn from the available literature. This is articulated by combining an empathic design process with controlled sound perception and interaction experiments. The identified design strategies have been applied to the development of a new collaborative digital instrument. A range of technical and compositional approaches was considered to define this process, which can be described as Adaptive Social Composition.
    Dan Livingston

    Body as Instrument – Performing with Gestural Interfaces

    This paper explores the challenge of achieving nuanced control and physical engagement with gestural interfaces in performance. Performances with a prototype gestural performance system, Gestate, provide the basis for insights into the application of gestural systems in live contexts. These reflections stem from a performer's perspective, summarising the experience of prototyping and performing with augmented instruments that extend vocal or instrumental technique through gestural control. Successful implementation of rapidly evolving gestural technologies in real-time performance calls for new approaches to performing and musicianship, centred on a growing understanding of the body's physical and creative potential. For musicians hoping to incorporate gestural control seamlessly into their performance practice, a balance of technical mastery and kinaesthetic awareness is needed to adapt existing approaches to their own purposes. Within non-tactile systems, visual feedback mechanisms can support this process by providing explicit visual cues that compensate for the absence of haptic feedback. Experience gained through prototyping and performance can yield a deeper understanding of the broader nature of gestural control and the way in which performers inhabit their own bodies.

    From Sine Waves to Soundscapes: Exploring the Art and Science of Analog Synthesizer Design

    Senior Project submitted to The Division of Science, Mathematics and Computing of Bard College

    Advancing performability in playable media : a simulation-based interface as a dynamic score

    When designing playable media with a non-game orientation, alternative play scenarios to gameplay scenarios must be accompanied by alternative mechanics to game mechanics. The problems of designing playable media with a non-game orientation are stated as the problems of designing a platform for creative exploration and creative expression. For such design problems, two requirements are articulated: 1) play state transitions must be dynamic in non-trivial ways in order to achieve a significant level of engagement, and 2) pathways must be provided for players’ experience from exploration to expression. The transformative pathway from creative exploration to creative expression is analogous to the pathways of game players’ skill acquisition in gameplay. The paper first describes a concept of a simulation-based interface, and then binds that concept with the concept of a dynamic score. The former partially accounts for the first requirement, the latter for the second. The paper describes the prototype and realization of the two concepts’ binding. “Score” is here defined as a representation of cue organization through a transmodal abstraction. A simulation-based interface with swarm mechanics is presented, and its function as a dynamic score is demonstrated with an interactive musical composition and performance.
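The abstract does not detail the swarm mechanics behind the simulation-based interface. As a hedged illustration only, a minimal cohesion-only swarm whose aggregate state is read off as score parameters might look like the sketch below; the specific mappings (centroid height to register, inverse spread to dynamics) are assumptions, not the paper's design.

```python
import random

class Agent:
    """A point agent in a unit square."""
    def __init__(self):
        self.x = random.uniform(0.0, 1.0)
        self.y = random.uniform(0.0, 1.0)

def swarm_step(agents, cohesion=0.05):
    """One update: each agent drifts toward the swarm centroid."""
    cx = sum(a.x for a in agents) / len(agents)
    cy = sum(a.y for a in agents) / len(agents)
    for a in agents:
        a.x += (cx - a.x) * cohesion
        a.y += (cy - a.y) * cohesion

def swarm_to_cues(agents):
    """Read the swarm state as score parameters (both in 0..1):
    centroid height drives pitch register, and a tighter swarm
    (smaller spread) drives louder dynamics."""
    cx = sum(a.x for a in agents) / len(agents)
    cy = sum(a.y for a in agents) / len(agents)
    spread = sum(abs(a.x - cx) + abs(a.y - cy) for a in agents) / len(agents)
    return {'register': cy, 'dynamics': max(0.0, 1.0 - spread)}
```

The point of the sketch is the "dynamic score" reading: the simulation evolves on its own, and the performer influences it rather than setting parameters directly, so the cue stream is emergent rather than fixed.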

    A voice operated musical instrument.

    Many mathematical formulas and algorithms exist to identify pitches formed by human voices, and pitch identification has continued to be popular in the fields of music and signal processing. Other systems and research perform real-time pitch identification using PCs with system clocks faster than 400 MHz. This thesis explores developing an embedded RPTI system using the average magnitude difference function (AMDF), which also uses MIDI commands to control a synthesizer to track the pitch in near real time. The AMDF algorithm was simulated and its performance analyzed in MATLAB with pre-recorded sound files from a PC. Errors inherent to the AMDF and the hardware constraints led to noticeable pitch errors. The MATLAB code was optimized and its performance verified for Motorola 68000 assembly language. This stage of development led to the realization that the original design would have to change to accommodate the processing time required by the AMDF implementation. Hardware was constructed to support an 8 MHz Motorola 68000, analog input, and MIDI communications. The various modules were constructed using Vectorbord© prototyping board with soldered tracks, wires, and sockets. Modules were tested individually and as a whole unit. A design flaw was noticed in the final design, which caused the unit to fail during program execution while operating in stand-alone mode. This design is a proof of concept for a product that can be improved upon with newer components, more advanced algorithms and hardware construction, and a more aesthetically pleasing package. Ultimately, hardware limitations imposed by the available equipment, in addition to a hidden design flaw, contributed to the failure of this stand-alone prototype.
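The AMDF at the heart of the thesis is a standard technique: AMDF(lag) is the mean absolute difference between the signal and a lagged copy of itself, and the lag with the deepest valley estimates the pitch period. Below is a minimal Python sketch of that idea (not the thesis's 68000 implementation) plus a frequency-to-MIDI-note conversion; the search-range parameters are illustrative.

```python
import math

def amdf_pitch(samples, sample_rate, min_f=80.0, max_f=500.0):
    """Estimate fundamental frequency with the average magnitude
    difference function: AMDF(lag) = mean(|x[n] - x[n + lag]|).
    The lag with the smallest AMDF approximates the pitch period."""
    min_lag = int(sample_rate / max_f)
    max_lag = int(sample_rate / min_f)
    n = len(samples) - max_lag          # compare over a fixed window
    best_lag, best_val = min_lag, float('inf')
    for lag in range(min_lag, max_lag + 1):
        val = sum(abs(samples[i] - samples[i + lag]) for i in range(n)) / n
        if val < best_val:
            best_val, best_lag = val, lag
    return sample_rate / best_lag

def freq_to_midi(freq):
    """Convert a frequency in Hz to the nearest MIDI note number."""
    return round(69 + 12 * math.log2(freq / 440.0))
```

Constraining the lag search to a plausible vocal range, as the `min_f`/`max_f` bounds do, helps mitigate the octave errors the thesis notes as inherent to the AMDF, since multiples of the true period also produce deep valleys.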
