
    Novel interfaces for controlling sound effects and physical models


    Composing Music for Acoustic Instruments and Electronics Mediated Through the Application of Microsound

    This project seeks to extend, through a portfolio of compositions, the use of microsound to mixed works incorporating acoustic instruments and electronics. Issues relating to the notation of microsound when used with acoustic instruments are explored, and the adoption of a clear and intuitive system of graphical notation is proposed. The design of the performance environment for the electroacoustic part is discussed and different models for the control of the electronics are considered. Issues relating to structure and form when applied to compositions that mix note-based material with texture-based material are also considered. A framework based on a pure sound/noise continuum, used in conjunction with a hierarchy of gestural archetypes, is adopted as a possible solution to the challenges of structuring mixed compositions. Gestural and textural relationships between different parts of the compositions are also explored, and the use of extended instrumental techniques to create continua between the acoustic and the electroacoustic is adopted. The roles of aleatoric techniques and improvisation in both the acoustic and the electroacoustic parts are explored through the adoption of an interactive performance environment incorporating a pitch-tracking algorithm. Finally, the advantages and disadvantages of real-time recording and processing of the electronic part, when compared with live processing of pre-existing sound-files, are discussed.

    Real-time Timbre Transfer and Sound Synthesis using DDSP

    Neural audio synthesis is an actively researched topic that has yielded a wide range of techniques leveraging machine learning architectures. Google Magenta developed a novel approach called Differentiable Digital Signal Processing (DDSP), which combines deep neural networks with preconditioned digital signal processing techniques, reaching state-of-the-art results especially in timbre transfer applications. However, most of these techniques, including DDSP, are generally not usable under real-time constraints, making them unsuitable for a musical workflow. In this paper, we present a real-time implementation of the DDSP library embedded in a virtual synthesizer, distributed as a plug-in that can be used in a Digital Audio Workstation. We focused on timbre transfer from learned representations of real instruments to arbitrary sound inputs, as well as on controlling these models via MIDI. Furthermore, we developed a GUI with intuitive high-level controls for post-processing and manipulating the parameters estimated by the neural network. We conducted an online user experience test with seven participants. The results indicated that our users found the interface appealing, easy to understand, and worth exploring further. At the same time, we identified issues in the timbre transfer quality, in some components we did not implement, and in the installation and distribution of our plug-in. The next iteration of our design will address these issues. Our real-time JUCE and MATLAB implementations are available at https://github.com/SMC704/juce-ddsp and https://github.com/SMC704/matlab-ddsp, respectively.
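    The core idea behind DDSP-style synthesis is that a neural network estimates frame-wise synthesis controls (fundamental frequency and harmonic amplitudes) that drive a bank of sinusoidal oscillators. The following NumPy sketch illustrates only the oscillator-bank half of that pipeline under assumed conventions (linear upsampling of controls, cumulative-sum phase integration); the function name `harmonic_synth` and all parameter defaults are hypothetical, and a real DDSP model implements this differentiably in TensorFlow or PyTorch.

```python
import numpy as np

def harmonic_synth(f0, harm_amps, sample_rate=16000, hop=64):
    """Additive harmonic oscillator bank in the spirit of DDSP
    (illustrative NumPy sketch, not differentiable).

    f0:        per-frame fundamental frequency in Hz, shape (frames,)
    harm_amps: per-frame harmonic amplitudes, shape (frames, n_harmonics)
    """
    n_frames, n_harm = harm_amps.shape
    n_samples = n_frames * hop
    # Upsample frame-rate controls to audio rate by linear interpolation
    t_frames = np.arange(n_frames) * hop
    t_audio = np.arange(n_samples)
    f0_up = np.interp(t_audio, t_frames, f0)
    amps_up = np.stack(
        [np.interp(t_audio, t_frames, harm_amps[:, k]) for k in range(n_harm)],
        axis=1)
    # Integrate instantaneous frequency to obtain the fundamental's phase
    phase = 2 * np.pi * np.cumsum(f0_up) / sample_rate
    harmonics = np.arange(1, n_harm + 1)
    # Silence any harmonic that would exceed Nyquist, to avoid aliasing
    alias = (f0_up[:, None] * harmonics[None, :]) >= sample_rate / 2
    amps_up = np.where(alias, 0.0, amps_up)
    # Each harmonic k has phase k * phase(t); sum the weighted sinusoids
    return np.sum(amps_up * np.sin(phase[:, None] * harmonics[None, :]), axis=1)
```

In a full timbre-transfer system, `f0` would come from a pitch tracker applied to the input audio and `harm_amps` from the trained decoder network, which is what lets an arbitrary sound input "play" the learned instrument.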

    PHYSMISM: a control interface for creative exploration of physical models

    In this paper we describe the design and implementation of the PHYSMISM: an interface for exploring the possibilities for improving the creative use of physical modelling sound synthesis. The PHYSMISM is implemented in a software and a hardware version. Moreover, four different physical modelling techniques are implemented, to explore the implications of using and combining different techniques. In order to evaluate the creative use of physical models, a test was performed with 11 experienced musicians as test subjects. Results show that the capability of combining the physical models and the use of a physical interface engaged the musicians in creative exploration of physical models.

    Composing with Microsound: An Approach to Structure and Form when Composing for Acoustic Instruments with Electronics

    This paper explores the implications of using microsound as an organising principle when structuring compositions for acoustic instruments and electronics. The ideas are presented in the context of a composition by the author for bass clarinet, flute, piano and electronics: The Sea Turns Sand To Stone (2015). After giving a definition of microsound, the compositional affordances of microsound are considered. Microsound is presented as an aesthetically rich tool for creating cohesion between acoustic and electroacoustic sounds, and different parameters for manipulating the sounds are presented. Issues of structure and form are discussed, and the challenges of creating a coherent environment that uses both note-based and texture-based material are explored. The implications of applying different models of form to mixed compositions are considered. This leads to a discussion of the different relationships that exist between the acoustic and the electroacoustic parts of a composition. Extended instrumental techniques provide one way of creating perceptual links between the acoustic and the electroacoustic. Examples of the way such techniques have been used in conjunction with microsound to impose a structural framework on The Sea Turns Sand To Stone are given. Finally, the use of a pure sound/noise axis, mediated through the application of microsound, is presented as a viable organising principle for structuring mixed compositions. The implications of such a model are explored, and the underlying structure of The Sea Turns Sand To Stone is presented as a practical example of the application of the process.

    DDX7: Differentiable FM Synthesis of Musical Instrument Sounds

    FM synthesis is a well-known algorithm used to generate complex timbres from a compact set of design primitives. Because FM synthesizers typically feature a MIDI interface, it is usually impractical to control them from an audio source. On the other hand, Differentiable Digital Signal Processing (DDSP) has enabled nuanced audio rendering by Deep Neural Networks (DNNs) that learn to control differentiable synthesis layers from arbitrary sound inputs. The training process involves a corpus of audio for supervision and spectral reconstruction loss functions. Such functions, while effective at matching spectral amplitudes, lack pitch direction, which can hinder the joint optimization of the parameters of FM synthesizers. In this paper, we take steps towards enabling continuous control of a well-established FM synthesis architecture from an audio input. Firstly, we discuss a set of design constraints that ease spectral optimization of a differentiable FM synthesizer via a standard reconstruction loss. Next, we present Differentiable DX7 (DDX7), a lightweight architecture for neural FM resynthesis of musical instrument sounds in terms of a compact set of parameters. We train the model on instrument samples extracted from the URMP dataset, and quantitatively demonstrate audio quality comparable to selected benchmarks.
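    The "spectral reconstruction loss" mentioned in the abstract is commonly realized as a multi-scale spectral loss: an L1 distance between magnitude (and log-magnitude) spectrograms of target and prediction, summed over several FFT sizes. The sketch below shows one plausible NumPy formulation for illustration; the function names, FFT sizes, and hop convention (`n_fft // 4`) are assumptions, and actual training would use a differentiable framework such as PyTorch or TensorFlow. Note that because this loss compares magnitudes bin by bin, it gives no signed gradient toward the correct pitch, which is the "lack of pitch direction" the abstract refers to.

```python
import numpy as np

def stft_mag(x, n_fft, hop):
    """Magnitude STFT (Hann window, no centering) -- NumPy sketch."""
    win = np.hanning(n_fft)
    frames = [np.abs(np.fft.rfft(x[s:s + n_fft] * win))
              for s in range(0, len(x) - n_fft + 1, hop)]
    return np.array(frames)

def multiscale_spectral_loss(target, pred,
                             fft_sizes=(2048, 1024, 512, 256), eps=1e-7):
    """Multi-scale spectral reconstruction loss of the kind used to
    train DDSP-style models: L1 on magnitudes plus L1 on log-magnitudes,
    summed over several FFT resolutions."""
    loss = 0.0
    for n_fft in fft_sizes:
        hop = n_fft // 4  # assumed 75% overlap
        s_t = stft_mag(target, n_fft, hop)
        s_p = stft_mag(pred, n_fft, hop)
        loss += np.mean(np.abs(s_t - s_p))
        loss += np.mean(np.abs(np.log(s_t + eps) - np.log(s_p + eps)))
    return loss
```

Identical signals yield exactly zero loss, while signals differing in pitch or timbre yield a positive value; the multiple FFT sizes trade off time versus frequency resolution so that both transients and fine spectral detail contribute to the gradient.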