
    Technology and composition — an autoethnography on the influence of electronics on orchestration practice

    This research explores novel methods of orchestration, focusing on the influence of electronics on my own orchestration practice. By drawing on electronic music composition techniques and timbral-shaping tools, this project probes the boundaries of orchestration and examines the processes that inform orchestration decisions. Through the resulting portfolio, I explore timbral blend, spatialization and acoustics, real-time orchestration, computer-aided/assisted orchestration, and the extension of the timbral palette through a rethinking of the ideals of spectral composition. These methods aim to create unique sound worlds and audience experiences while situating my distinctive approach in relation to other existing practices. Furthermore, a supporting commentary illuminates the deep pre-compositional research that informs my orchestration practice, identifying the techniques and evaluating their application. Exploring such concepts demands practice-led autoethnographic research, which allows for the full, creative exploration and application of site-specific and acoustic/electronic tools. By recognizing the impact of electronics on my approach to orchestration, I have made exciting discoveries in this field, integrating electronic and non-electronic systems into what I regard as my orchestration discourse. The radical overhaul of my orchestration approach has highlighted how much work remains to be done in the realm of human-machine creative collaboration, and that sound has many more lessons to teach me. This research marks a ‘checkpoint’ in life-long research as contemporary arts and science work hand in hand. We cannot disregard the fact that the gap between instrumental music and electronic music remains largely unexplored in the domain of timbre-based orchestration.
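
    The spectral-composition thread above lends itself to a concrete illustration. As a minimal sketch (a hypothetical example, not the author's actual toolchain), the following Python snippet extracts the strongest partials of a sustained sound with numpy; this is the kind of spectral data a composer might translate back into instrumental orchestration:

```python
# Minimal sketch of spectral analysis for orchestration: find the strongest
# partials of a sustained sound so they can be reassigned to instruments.
# Hypothetical illustration, not the author's actual toolchain.
import numpy as np

def prominent_partials(signal, sample_rate, n_partials=8):
    """Return (frequency_hz, relative_amplitude) pairs for the strongest peaks."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # Keep local maxima only, so one loud partial's leakage does not
    # dominate the list of candidate peaks.
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
    peaks.sort(key=lambda i: spectrum[i], reverse=True)
    top = peaks[:n_partials]
    return [(freqs[i], spectrum[i] / spectrum[top[0]]) for i in top]

# Example: analyze one second of a synthetic 220 Hz tone with harmonics.
sr = 44100
t = np.arange(sr) / sr
tone = sum((1.0 / k) * np.sin(2 * np.pi * 220 * k * t) for k in range(1, 6))
for freq, amp in prominent_partials(tone, sr):
    print(f"{freq:7.1f} Hz  amplitude {amp:.2f}")
```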

    Computational Modeling and Analysis of Multi-timbral Musical Instrument Mixtures

    In the audio domain, the disciplines of signal processing, machine learning, psychoacoustics, information theory, and library science have merged into the field of Music Information Retrieval (Music-IR). Music-IR researchers attempt to extract high-level information from music, such as pitch, meter, genre, rhythm, and timbre, directly from audio signals as well as from semantic metadata across a wide variety of sources. This information is then used to organize and process data for large-scale retrieval and novel interfaces.

    Hardware and software tools for producing music have become commonplace in the digital landscape, but while the means to produce music are widely available, significant time must be invested to attain professional results. Mixing multi-channel audio requires techniques and training far beyond the knowledge of the average music software user. As a result, there is significant growth and development in intelligent signal processing for audio, an emergent field combining audio signal processing and machine learning for producing music.

    This work focuses on methods for modeling and analyzing multi-timbral musical instrument mixtures and on automated processing techniques that improve audio quality by quantitative and qualitative measures. The main contributions are models trained to predict mixing parameters for multi-channel audio sources and new methods for modeling the contributions of individual timbres to an overall mixture. Linear dynamical systems (LDS) are shown to be capable of learning the relative contributions of individual instruments needed to re-create a commercial recording from acoustic features extracted directly from audio. Variations in the model topology are explored to make it applicable to a more diverse range of input sources and to improve performance. An exploration of relevant features for modeling timbre and identifying instruments is performed. Using various basis decomposition techniques, audio examples are reconstructed and analyzed in a perceptual listening test to evaluate their ability to capture salient aspects of timbre. These tests show that a 2-D decomposition captures considerably more perceptually relevant information about the temporal evolution of the frequency spectrum of a set of audio examples. The results indicate that joint modeling of frequencies and their evolution is essential for capturing the higher-level concepts in audio that we desire to leverage in automated systems.

    Ph.D., Electrical Engineering -- Drexel University, 201
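
    The core idea of predicting mixing parameters from a reference recording can be sketched compactly. The snippet below is a deliberately simplified stand-in for the thesis's linear dynamical system approach: it solves only for static per-stem gains by least squares on magnitude spectrograms, assuming numpy and synthetic stand-in stems, whereas the actual work models time-varying feature dynamics:

```python
# Simplified sketch of learning instrument contributions to a mix.
# The thesis uses linear dynamical systems over acoustic features; here we
# solve only for static per-stem gains via least squares, as an illustration.
import numpy as np

def estimate_stem_gains(stems, reference_mix, n_fft=1024, hop=512):
    """Least-squares gains so that sum_i g_i * |STFT(stem_i)| ~ |STFT(mix)|."""
    def mag_spectrogram(x):
        frames = [x[i:i + n_fft] * np.hanning(n_fft)
                  for i in range(0, len(x) - n_fft, hop)]
        return np.abs(np.fft.rfft(frames, axis=1)).ravel()

    # Columns of A: one flattened magnitude spectrogram per stem.
    A = np.stack([mag_spectrogram(s) for s in stems], axis=1)
    b = mag_spectrogram(reference_mix)
    gains, *_ = np.linalg.lstsq(A, b, rcond=None)
    return gains

# Toy check: a "mix" built from two synthetic stems at known levels.
sr = 22050
t = np.arange(sr) / sr
stems = [np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 660 * t)]
mix = 0.8 * stems[0] + 0.3 * stems[1]
print(estimate_stem_gains(stems, mix))  # approximately [0.8, 0.3]
```

    Because magnitude spectrograms are only approximately additive, this recovers the gains well when the stems occupy largely distinct regions of the spectrum; it is a sanity-check illustration, not the thesis's method.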

    A statistical model of timbre perception

    We describe a perceptual space for timbre, define an objective metric that takes perceptual orthogonality into account, and measure the quality of timbre interpolation. We discuss two timbre representations and collect perceptual judgments over an equivalent range of timbre variety. We determine that a timbre space based on Mel-frequency cepstral coefficients (MFCCs) is a good model for a perceptual timbre space.
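
    To make the MFCC timbre space concrete, here is a minimal sketch (assuming librosa is available; the file names are placeholders) that embeds each sound as a time-averaged MFCC vector and uses Euclidean distance as a simple proxy for perceptual timbre dissimilarity in such a space:

```python
# Minimal sketch of an MFCC-based timbre space: each sound becomes a point
# (its time-averaged MFCC vector) and Euclidean distance stands in for
# perceptual dissimilarity. Assumes librosa; file names are placeholders.
import numpy as np
import librosa

def mfcc_point(path, n_mfcc=13):
    """Embed an audio file as a single point in MFCC timbre space."""
    y, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # average the coefficients over time frames

def timbre_distance(path_a, path_b):
    """Euclidean distance between two sounds in the MFCC space."""
    return float(np.linalg.norm(mfcc_point(path_a) - mfcc_point(path_b)))

# Hypothetical usage: smaller distances suggest more similar timbres.
print(timbre_distance("clarinet_a4.wav", "oboe_a4.wav"))
```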