Introducing CatOracle: Corpus-based concatenative improvisation with the Audio Oracle algorithm
CatOracle responds to the need to join high-level control of audio timbre with the organization of musical form in time. It is inspired by two powerful existing tools: CataRT for corpus-based concatenative synthesis based on the MuBu for Max library, and PyOracle for computer improvisation, combining for the first time audio descriptor analysis with the learning and generation of musical structures. Harnessing a user-defined list of audio features, live or prerecorded audio is analyzed to construct an “Audio Oracle” as a basis for improvisation. CatOracle also extends features of classic concatenative synthesis to include live interactive audio mosaicking and score-based transcription using the bach library for Max. The project suggests applications not only to live performance of written and improvised electroacoustic music, but also to computer-assisted composition and musical analysis.
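The Audio Oracle generalizes the factor-oracle automaton from exact symbols to audio-feature frames. As a point of reference only, here is a minimal sketch of symbolic factor-oracle construction following the standard incremental algorithm; an audio version would replace the exact-symbol match with a distance threshold on descriptor vectors. Function and variable names are illustrative assumptions, not taken from the CatOracle source.

```python
def build_oracle(sequence):
    """Return (transitions, suffix_links) for the factor oracle of `sequence`."""
    n = len(sequence)
    trans = [{} for _ in range(n + 1)]  # state -> {symbol: target state}
    sfx = [-1] * (n + 1)                # suffix links; state 0 links to -1
    for i, sym in enumerate(sequence):
        state = i + 1
        trans[i][sym] = state           # forward transition for the new symbol
        k = sfx[i]
        # Follow suffix links, adding external transitions where none exist.
        while k > -1 and sym not in trans[k]:
            trans[k][sym] = state
            k = sfx[k]
        # Set the new state's suffix link.
        sfx[state] = 0 if k == -1 else trans[k][sym]
    return trans, sfx
```

In a generative setting, improvisation amounts to walking forward transitions for continuity and occasionally jumping along suffix links to recombine previously heard material.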
Electrifying Opera, Amplifying Agency: Designing a performer-controlled interactive audio system for opera singers
This artistic research project examines the artistic, technical, and pedagogical challenges of developing a performer-controlled interactive technology for real-time vocal processing of the operatic voice. As a classically trained singer-composer, I have explored ways to merge the compositional aspects of transforming electronic sound with the performative aspects of embodied singing.
I set out to design, develop, and test a prototype for an interactive vocal processing system using sampling and audio processing methods. The aim was to foreground and accommodate an unamplified operatic voice interacting with the room's acoustics and the extended disembodied voices of the same performer. The iterative prototyping explored the performer's relationship to the acoustic space, the relationship between the embodied acoustic voice and disembodied processed voice(s), and the relationship to memory and time.
One of the core challenges was to design a system that would accommodate mobility and allow interaction based on auditory and haptic cues rather than visual. In other words, a system allowing the singer to control their sonic output without standing behind a laptop. I wished to highlight and amplify the performer's agency with a system that would enable nuanced and variable vocal processing, be robust, teachable, and suitable for use in various settings: solo performances, various types and sizes of ensembles, and opera. This entailed mediating different needs, training, and working methods of both electronic music and opera practitioners.
One key finding was that even simple audio processing could achieve complex musical results. The audio processes used were primarily combinations of feedback and delay lines. Nevertheless, performers could obtain complex musical results quickly through continuous gestural control and the ability to route signals to four channels. This complexity sometimes led to surprising results, eliciting improvisatory responses even from singers without prior experience of musical improvisation.
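The feedback-and-delay combination described above can be illustrated with a minimal offline sketch. The actual system runs in real time under gestural control; the function and parameter names here are assumptions for illustration only.

```python
def feedback_delay(signal, delay_samples, feedback, mix=0.5):
    """Offline feedback delay line: returns the processed signal as a list."""
    buf = [0.0] * delay_samples        # circular delay buffer
    out = []
    idx = 0
    for x in signal:
        delayed = buf[idx]             # read the delayed sample
        y = x + feedback * delayed     # feedback path: re-inject delayed output
        buf[idx] = y                   # write back into the delay line
        idx = (idx + 1) % delay_samples
        out.append((1 - mix) * x + mix * delayed)  # dry/wet mix
    return out
```

With `feedback` near 1.0 the echoes decay slowly and the line drifts toward self-sustaining texture, which is one way even a simple process can yield the complex results the abstract describes.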
The project has resulted in numerous vocal solo, chamber, and operatic performances in Norway, the Netherlands, Belgium, and the United States. The research contributes to developing emerging technologies for live electronic vocal processing in opera, developing the improvisational performance skills needed to engage with those technologies, and exploring alternatives for sound diffusion conducive to working with unamplified operatic voices.
Links:
Exposition and documentation of PhD research in Research Catalogue: Electrifying Opera, Amplifying Agency. Artistic results. Reflection and Public Presentations (PhD) (2023):
https://www.researchcatalogue.net/profile/show-exposition?exposition=2222429
Home/Reflections:
https://www.researchcatalogue.net/view/2222429/2222460
Mapping & Prototyping:
https://www.researchcatalogue.net/view/2222429/2247120
Space & Speakers:
https://www.researchcatalogue.net/view/2222429/2222430
Presentations:
https://www.researchcatalogue.net/view/2222429/2247155
Artistic Results:
https://www.researchcatalogue.net/view/2222429/222248
Composing Music for Acoustic Instruments and Electronics Mediated Through the Application of Microsound
This project seeks to extend, through a portfolio of compositions, the use of microsound to mixed works incorporating acoustic instruments and electronics. Issues relating to the notation of microsound when used with acoustic instruments are explored, and the adoption of a clear and intuitive system of graphical notation is proposed. The design of the performance environment for the electroacoustic part is discussed and different models for the control of the electronics are considered. Issues relating to structure and form when applied to compositions that mix note-based material with texture-based material are also considered. A framework based on a pure sound/noise continuum, used in conjunction with a hierarchy of gestural archetypes, is adopted as a possible solution to the challenges of structuring mixed compositions. Gestural and textural relationships between different parts of the compositions are also explored, and the use of extended instrumental techniques to create continua between the acoustic and the electroacoustic is adopted. The role of aleatoric techniques and improvisation in both the acoustic and the electroacoustic parts is explored through the adoption of an interactive performance environment incorporating a pitch-tracking algorithm. Finally, the advantages and disadvantages of real-time recording and processing of the electronic part, when compared with live processing of pre-existing sound files, are discussed.
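The abstract does not specify which pitch-tracking algorithm the performance environment uses. As one common approach, a naive autocorrelation estimator can be sketched as follows; all names, rates, and ranges are illustrative assumptions.

```python
import math

def estimate_pitch(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Return an f0 estimate in Hz by peak-picking the autocorrelation."""
    n = len(frame)
    lag_min = int(sample_rate / fmax)            # shortest period considered
    lag_max = min(int(sample_rate / fmin), n - 1)  # longest period considered
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        # Correlate the frame with a lagged copy of itself.
        corr = sum(frame[i] * frame[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Example input: a 220 Hz sine sampled at 8 kHz.
frame = [math.sin(2 * math.pi * 220 * t / 8000) for t in range(1024)]
```

Real-time implementations typically add windowing, normalization, and a voicing threshold, but the lag-search principle is the same.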
Towards musical interaction: 'Schismatics' for e-violin and computer.
This paper discusses the evolution of the Max/MSP patch used in schismatics (2007, rev. 2010) for electric violin (Violectra) and computer, by composer Sam Hayden in collaboration with violinist Mieko Kanno. schismatics involves a standard performance paradigm of a fixed notated part for the e-violin with sonically unfixed live computer processing. Hayden was unsatisfied with the early version of the piece: the use of attack detection on the live e-violin playing to trigger stochastic processes led to an essentially reactive behaviour in the computer, resulting in a somewhat predictable one-to-one sonic relationship between them. It demonstrated little internal relationship between the two beyond an initial e-violin ‘action’ causing a computer ‘event’. The revisions in 2010, enabled by an AHRC Practice-Led research award, aimed to achieve 1) a more interactive performance situation and 2) a subtler and more ‘musical’ relationship between live and processed sounds. This was realised through the introduction of sound analysis objects, in particular machine listening and learning techniques developed by Nick Collins. One aspect of the programming was the mapping of analysis data to synthesis parameters, enabling the computer transformations of the e-violin to be directly related to Kanno’s interpretation of the piece in performance.
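The attack detection that drove the early, reactive version of the patch can be illustrated, purely as a hedged sketch, by an energy-ratio onset detector. The actual piece uses Max/MSP objects; the function and threshold here are illustrative assumptions.

```python
def detect_attacks(frames, threshold=2.0):
    """Return indices of frames whose energy jumps by more than `threshold`x."""
    onsets = []
    prev_energy = None
    for i, frame in enumerate(frames):
        energy = sum(x * x for x in frame)   # frame energy
        # Flag an attack when energy rises sharply relative to the last frame.
        if prev_energy is not None and prev_energy > 0 \
                and energy / prev_energy > threshold:
            onsets.append(i)
        prev_energy = energy
    return onsets
```

Triggering a stochastic process only at such onsets yields exactly the one-to-one, action-causes-event behaviour the paper critiques; the 2010 revision instead maps continuous analysis data to synthesis parameters.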
Improvisatory music and painting interface
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004. Includes bibliographical references (p. 101-104). Shaping collective free improvisations in order to obtain solid and succinct works with surprising and synchronized events is not an easy task. This thesis is a proposal towards that goal. It presents the theoretical, philosophical and technical framework of the Improvisatory Music and Painting Interface (IMPI) system: a new computer program for the creation of audiovisual improvisations performed in real time by ensembles of acoustic musicians. The coordination of these improvisations is obtained using a graphical language. This language is employed by one "conductor" in order to generate musical scores and abstract visual animations in real time. Doodling on a digital tablet following the syntax of the language allows both the creation of musical material with different levels of improvisatory participation from the ensemble and also the manipulation of the projected graphics in coordination with the music. The generated musical information is displayed in several formats on multiple computer screens that members of the ensemble play from. The digital graphics are also projected on a screen to be seen by an audience. This system is intended for a non-tonal, non-rhythmic, and texture-oriented musical style, which means that strong emphasis is put on the control of timbral qualities and continuum transitions. One of the main goals of the system is the translation of planned compositional elements (such as precise structure and synchronization between instruments) into the improvisatory domain. The graphics that IMPI generates are organic, fluid, vivid, dynamic, and unified with the music.
The concept of controlled improvisation as well as the paradigm of the relationships between acoustic and visual material are both analyzed from an aesthetic point of view. The theoretical section is accompanied by descriptions of historic and contemporary works that have influenced IMPI.
Hugo Solís García. S.M.
My Physical Approach to Musique Concrete Composition: Portfolio of Studio Works
My recent practice-based research explores the creative potential of physical manipulation of sound in the composition of sound-based electronic music. Focusing on the poietic aspect of my music making, this commentary discusses the composition process of three musical works: Comme si la foudre pouvait durer, Igaluk - To Scare the Moon with its own Shadow and desert. It also examines the development of a software instrument, fXfD, along with its resulting musical production. Finally, it discusses the recent musical production of an improvisation duet in which I take part, Tout Croche.
In the creative process of this portfolio, the appreciation for sound is the catalyst of the musical decisions. In other words, the term "musique concrete" applies to my practice, as sound is the central concern that triggers the composition act. In addition to anecdotal, typo-morphological and functional concerns, the presence of a "trace of physicality" in a sound is, more than ever, what convinces me of its musical potential. In order to compose such sounds, a back-and-forth process between theoretical knowledge and sound manipulations will be defined and developed under the concept of "sonic empiricism."
In a desire to break with the cumbersome nature of studio-based composition work, approaches to sound-based electronic music playing were researched. Through the different musical projects, various digital instruments were conceived. In a case study, the text reviews them through their sound generation, gestural control and mapping components. I will also state personal preferences in the ways sound manipulations are performed. In the light of the observations made, the studio emerges as the central instrument upon which my research focuses. The variety of resources it provides for the production and control of sound confers the status of polymorphic instrument on the studio.
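The mapping component mentioned above, which connects a gestural controller to sound-generation parameters, can be sketched minimally as a range-mapping function. The names and curve choices are illustrative assumptions, not drawn from the fXfD instrument.

```python
import math

def map_control(value, lo, hi, curve="lin"):
    """Map a normalized controller value in [0, 1] to the range [lo, hi].

    'lin' gives a linear ramp; 'exp' gives exponential scaling (lo must be
    positive), which suits perceptual parameters such as frequency.
    """
    if curve == "exp":
        return lo * math.exp(value * math.log(hi / lo))
    return lo + value * (hi - lo)
```

A linear curve suits amplitudes and pan positions, while the exponential curve makes equal controller movements correspond to equal musical intervals when the target is a frequency range.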
The text concludes by reflecting on the possibilities of improvisation and performance that the studio offers when it is considered as an embodied polymorphic instrument. A concluding statement on the specific ear training needed for such a studio practice bridges the concepts of sound selection and digital instruments discussed herein.
Harmony and Technology Enhanced Learning
New technologies offer rich opportunities to support education in harmony. In this chapter we consider theoretical perspectives and underlying principles behind technologies for learning and teaching harmony. Such perspectives help in matching existing and future technologies to educational purposes, and in inspiring the creative re-appropriation of technologies.