Electrifying Opera, Amplifying Agency: Designing a performer-controlled interactive audio system for opera singers
This artistic research project examines the artistic, technical, and pedagogical challenges of developing a performer-controlled interactive technology for real-time vocal processing of the operatic voice. As a classically trained singer-composer, I have explored ways to merge the compositional aspects of transforming electronic sound with the performative aspects of embodied singing.
I set out to design, develop, and test a prototype for an interactive vocal processing system using sampling and audio processing methods. The aim was to foreground and accommodate an unamplified operatic voice interacting with the room's acoustics and the extended disembodied voices of the same performer. The iterative prototyping explored the performer's relationship to the acoustic space, the relationship between the embodied acoustic voice and disembodied processed voice(s), and the relationship to memory and time.
One of the core challenges was to design a system that would accommodate mobility and allow interaction based on auditory and haptic cues rather than visual ones; in other words, a system allowing the singer to control their sonic output without standing behind a laptop. I wished to highlight and amplify the performer's agency with a system that would enable nuanced and variable vocal processing and be robust, teachable, and suitable for use in various settings: solo performances, ensembles of various types and sizes, and opera. This entailed mediating the different needs, training, and working methods of both electronic music and opera practitioners.
One key finding was that even simple audio processing could yield complex musical results. The audio processes used were primarily combinations of feedback and delay lines, yet through continuous gestural control and the ability to route signals to four channels, performers could quickly achieve rich, layered textures. This complexity sometimes led to surprising results, eliciting improvisatory responses even from singers with no prior experience of musical improvisation.
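As a rough illustration of the kind of processing involved, the following Python sketch (a minimal offline stand-in, not the project's real-time system) shows how a single delay line with feedback turns one input event into a decaying train of echoes:

```python
def feedback_delay(x, delay, gain):
    """Apply a delay line with feedback: y[n] = x[n] + gain * y[n - delay]."""
    y = []
    for n, sample in enumerate(x):
        fed_back = gain * y[n - delay] if n >= delay else 0.0
        y.append(sample + fed_back)
    return y

# A single impulse becomes a decaying echo train:
impulse = [1.0] + [0.0] * 99
out = feedback_delay(impulse, delay=25, gain=0.6)
# out[0] == 1.0, out[25] == 0.6, out[50] == 0.36, ...
```

With a sung phrase in place of the impulse, each pass through the loop overlays a quieter copy of earlier material, which is one way even a single delay line can produce musically complex output from simple means.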
The project has resulted in numerous vocal solo, chamber, and operatic performances in Norway, the Netherlands, Belgium, and the United States. The research contributes to developing emerging technologies for live electronic vocal processing in opera, developing the improvisational performance skills needed to engage with those technologies, and exploring alternatives for sound diffusion conducive to working with unamplified operatic voices.
Links:
Exposition and documentation of PhD research in Research Catalogue: Electrifying Opera, Amplifying Agency. Artistic results. Reflection and Public Presentations (PhD) (2023):
https://www.researchcatalogue.net/profile/show-exposition?exposition=2222429
Home/Reflections:
https://www.researchcatalogue.net/view/2222429/2222460
Mapping & Prototyping:
https://www.researchcatalogue.net/view/2222429/2247120
Space & Speakers:
https://www.researchcatalogue.net/view/2222429/2222430
Presentations:
https://www.researchcatalogue.net/view/2222429/2247155
Artistic Results:
https://www.researchcatalogue.net/view/2222429/222248
Seeking out the spaces between: Using improvisation for collaborative composition and interactive technology
Copyright © 2010 ISAST. This article presents findings from experiments in piano performance with live electronics undertaken by the author since early 2007. The use of improvisation has infused every step of the process, both as a methodology for obtaining meaningful results with interactive technology and as a way to generate and characterize a collaborative musical space with composers. The technology used has included pre-built MIDI interfaces such as the PianoBar, actuators such as miniature DC motors, and sensor interfaces including the iCube and the Wii controller. Collaborators have included researchers at the Centre for Digital Music (QMUL), Richard Barrett, Pierre Alexandre Tremblay and Atau Tanaka. In seeking to create responsive "performance environments" at the piano, I explore live, performative control of electronics to create better connections for both performer (providing the same level of interpretive freedom as a "pure" instrumental performance) and audience (communicating clearly to them). I have been lucky to witness first-hand many live interactive performances and to work with various empathetic composers/performers in flexible working environments. Collaborating with experienced technologists and musicians, I have witnessed time and again what, for me, is a fundamental truth in interactive instrumental performance: as a living, spontaneous form it must be nurtured and informed by the performer's physicality and imagination as much as by the creativity or knowledge of the composer and/or technologist. Specifically in the case of sensors, their dependence on the detail of each person's body and reactions is so refined as to necessitate, I would argue, an entirely collaborative approach, and therefore one that involves at least directed improvisation and, more likely, fairly extensive improvised exploration.
The fundamentally personal and intimate nature of sensor readings (the amount of tension created by each performer, the shape of the ancillary gestures, or the level of emotional involvement, especially relevant when using galvanic skin response or EEG) makes creating pieces with sensors extremely difficult for a composer working in isolation. Improvisation therefore provides a way for performer and composer to generate a common musical and gestural language. Related to these issues is the fact that the technical and notational parameters of interactive music are not yet (and may never be) standardized, creating a very real and practical need for improvisation to figure at least somewhere in the process. This study was funded by the Arts and Humanities Research Council.
Harmony and Technology Enhanced Learning
New technologies offer rich opportunities to support education in harmony. In this chapter we consider theoretical perspectives and underlying principles behind technologies for learning and teaching harmony. Such perspectives help in matching existing and future technologies to educational purposes, and in inspiring the creative re-appropriation of technologies.
Synesthetic: Composing Works for Marimba and Automated Lighting
This paper describes a series of explorations aimed at developing new modes of performance using percussion and computer-controlled lighting, linked by electronic sensing technology. Music and colour are often imagined to be related, and parallels have been drawn between the colour spectrum and the keyboard. Some people experience a condition, chromesthesia (a type of synesthesia), in which experiences of colour and sound are linked in the brain. In our work, we sought to explore such links and render them on stage as part of a musical performance. Over the course of this project, tools and strategies were developed to create a performance work consisting of five short movements, each emphasising a different interactive strategy between the performer, lights, and composition. In this paper, we describe the tools created to support this work: a custom wearable lighting and sensing system, and a microcontroller-based OSC-to-DMX lighting controller. We discuss each composition and how the interactions reflect ideas about synesthesia. This work was supported by Arts Council Norway.
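As a hypothetical illustration of a spectrum-to-keyboard mapping in the spirit described above (not the system the paper implements), one might wrap pitch class onto the hue circle and convert to RGB for a lighting fixture:

```python
import colorsys

def note_to_rgb(midi_note, brightness=1.0):
    """Map a MIDI pitch class (0-11) onto the hue circle and return 8-bit RGB."""
    hue = (midi_note % 12) / 12.0  # one semitone = 1/12 of a turn; C lands on red
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, brightness)
    return int(r * 255), int(g * 255), int(b * 255)

# Middle C (MIDI 60) maps to pure red; octaves share a colour:
print(note_to_rgb(60))  # (255, 0, 0)
```

The resulting RGB triple could then be packed into a DMX channel frame; the note-to-hue assignment here is arbitrary, and any chromesthetic mapping is ultimately a compositional choice rather than a fixed correspondence.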
Design Strategies for Adaptive Social Composition: Collaborative Sound Environments
In order to develop successful collaborative music systems, a variety of subtle interactions need to be identified and integrated. Gesture capture, motion tracking, real-time synthesis, environmental parameters and ubiquitous technologies can each be effectively used for developing innovative approaches to instrument design, sound installations, interactive music and generative systems. Current solutions tend to prioritise one or more of these approaches, refining a particular interface technology, software design or compositional approach developed for a specific composition, performer or installation environment. Within this diverse field a group of novel controllers, described as "Tangible Interfaces", have been developed. These are intended for use by novices and in many cases follow a simple model of interaction, controlling synthesis parameters through simple user actions. Other approaches offer sophisticated compositional frameworks, but many of these are idiosyncratic and highly personalised; as such, they are difficult to engage with and ineffective for groups of novices. The objective of this research is to develop effective design strategies for implementing collaborative sound environments using key terms and vocabulary drawn from the available literature. This is articulated by combining an empathic design process with controlled sound perception and interaction experiments. The identified design strategies have been applied to the development of a new collaborative digital instrument. A range of technical and compositional approaches was considered to define this process, which can be described as Adaptive Social Composition.
Dan Livingston
Body as Instrument – Performing with Gestural Interfaces
This paper explores the challenge of achieving nuanced control and physical engagement with gestural interfaces in performance. Performances with a prototype gestural performance system, Gestate, provide the basis for insights into the application of gestural systems in live contexts. These reflections stem from a performer's perspective, summarising the experience of prototyping and performing with augmented instruments that extend vocal or instrumental technique through gestural control. Successful implementation of rapidly evolving gestural technologies in real-time performance calls for new approaches to performing and musicianship, centred on a growing understanding of the body's physical and creative potential. For musicians hoping to incorporate gestural control seamlessly into their performance practice, a balance of technical mastery and kinaesthetic awareness is needed to adapt existing approaches to their own purposes. Within non-tactile systems, visual feedback mechanisms can support this process by providing explicit visual cues that compensate for the absence of haptic feedback. Experience gained through prototyping and performance can yield a deeper understanding of the broader nature of gestural control and the way in which performers inhabit their own bodies.
From Sine Waves to Soundscapes: Exploring the Art and Science of Analog Synthesizer Design
Senior Project submitted to The Division of Science, Mathematics and Computing of Bard College
Advancing performability in playable media : a simulation-based interface as a dynamic score
When designing playable media with a non-game orientation, alternative play scenarios to gameplay scenarios must be accompanied by alternative mechanics to game mechanics. The problems of designing playable media with a non-game orientation are stated as the problems of designing a platform for creative exploration and creative expression. For such design problems, two requirements are articulated: 1) play state transitions must be dynamic in non-trivial ways in order to achieve a significant level of engagement, and 2) pathways for players' experience from exploration to expression must be provided. The transformative pathway from creative exploration to creative expression is analogous to the pathways for game players' skill acquisition in gameplay. The paper first describes the concept of a simulation-based interface, and then binds that concept with the concept of a dynamic score. The former partially accounts for the first requirement, the latter for the second. The paper describes the prototype and realization of the two concepts' binding. "Score" is here defined as a representation of cue organization through a transmodal abstraction. A simulation-based interface is presented with swarm mechanics, and its function as a dynamic score is demonstrated with an interactive musical composition and performance.
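A minimal sketch of the idea, under assumed mechanics rather than the paper's actual model: agents drift toward their centroid (a simple cohesion rule), and a musical cue is read off the swarm's state, so the simulation itself functions as a dynamic score.

```python
def centroid(agents):
    """Mean position of a list of (x, y) agents."""
    n = len(agents)
    return (sum(x for x, _ in agents) / n, sum(y for _, y in agents) / n)

def step(agents, cohesion=0.1):
    """Move each agent a fraction of the way toward the swarm centroid."""
    cx, cy = centroid(agents)
    return [(x + cohesion * (cx - x), y + cohesion * (cy - y)) for x, y in agents]

def cue_from_swarm(agents, low=40, high=88):
    """Read the centroid height (clamped to 0..1) as a MIDI-style pitch cue."""
    _, cy = centroid(agents)
    return round(low + max(0.0, min(1.0, cy)) * (high - low))

swarm = [(0.0, 0.2), (1.0, 0.8), (0.5, 0.5)]
for _ in range(20):
    swarm = step(swarm)
# As the agents cluster, the cue they emit stabilises.
print(cue_from_swarm(swarm))
```

A player perturbing agent positions would steer the cue stream, which gives the transformative pathway the paper describes: exploratory play with the simulation gradually becomes expressive control of the score.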
A voice operated musical instrument.
Many mathematical formulas and algorithms exist to identify pitches produced by human voices, and pitch identification has remained popular in the fields of music and signal processing. Other systems and research perform real-time pitch identification on PCs with system clocks faster than 400 MHz. This thesis explores developing an embedded real-time pitch identification (RPTI) system using the average magnitude difference function (AMDF), which also uses MIDI commands to control a synthesizer to track the pitch in near real time. The AMDF algorithm was simulated and its performance analyzed in MATLAB with pre-recorded sound files from a PC. Errors inherent to the AMDF and the hardware constraints led to noticeable pitch errors. The MATLAB code was optimized and its performance verified for Motorola 68000 assembly language. This stage of development led to the realization that the original design would have to change to accommodate the processing time required by the AMDF implementation. Hardware was constructed to support an 8 MHz Motorola 68000, analog input, and MIDI communications. The various modules were constructed using Vectorbord© prototyping board with soldered tracks, wires and sockets. Modules were tested individually and as a whole unit. A design flaw was noticed in the final design, which caused the unit to fail during program execution while operating in stand-alone mode. This design is a proof of concept for a product that can be improved upon with newer components, more advanced algorithms and hardware construction, and a more aesthetically pleasing package. Ultimately, hardware limitations imposed by the available equipment, in addition to a hidden design flaw, contributed to the failure of this stand-alone prototype.
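The AMDF itself is simple enough to sketch offline. A hypothetical Python version (stdlib only, not the thesis's MATLAB or 68000 code) that recovers the pitch of a synthetic tone: the function averages |x[n] − x[n+τ]| over a frame, and the lag τ where this dips toward zero is the period.

```python
import math

def amdf(frame, lag):
    """Average magnitude difference: mean of |x[n] - x[n+lag]| over the frame."""
    n = len(frame) - lag
    return sum(abs(frame[i] - frame[i + lag]) for i in range(n)) / n

def detect_pitch(frame, rate, f_min=150, f_max=400):
    """Pick the lag in [rate/f_max, rate/f_min] where the AMDF is lowest."""
    lags = range(int(rate / f_max), int(rate / f_min) + 1)
    best = min(lags, key=lambda lag: amdf(frame, lag))
    return rate / best

rate = 8000
tone = [math.sin(2 * math.pi * 200 * n / rate) for n in range(400)]
print(detect_pitch(tone, rate))  # -> 200.0
```

This also hints at why the embedded port struggled: the double loop over frame samples and candidate lags is exactly the per-frame workload that an 8 MHz 68000 had to finish within one frame period, and restricting the lag search range is one of the few cheap ways to cut it down.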