    Perceptually smooth timbral guides by state-space analysis of phase-vocoder parameters

    Sculptor is a phase-vocoder-based package of programs that allows users to explore timbral manipulation of sound in real time. It is the product of a research program seeking ultimately to perform gestural capture by analysis of the sound a performer makes using a conventional instrument. Since the phase-vocoder output is of high dimensionality (typically more than 1,000 channels per analysis frame), mapping it to appropriate input parameters for a synthesizer is only feasible in theory
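
    As a rough illustration of the dimensionality claim above, the sketch below (C++, assuming a 2048-point analysis FFT, a figure not taken from the abstract) counts the values in a single phase-vocoder frame, where each bin carries an amplitude and a frequency.

    ```cpp
    #include <cstddef>
    #include <cstdio>

    int main() {
        // Assumed analysis size; an N-point FFT yields N/2 + 1 frequency bins.
        const std::size_t fft_size = 2048;
        const std::size_t bins = fft_size / 2 + 1;        // 1025 bins
        // A phase-vocoder frame stores an amplitude and a frequency per bin.
        const std::size_t values_per_frame = bins * 2;    // 2050 values
        std::printf("%zu bins, %zu values per analysis frame\n",
                    bins, values_per_frame);
        return 0;
    }
    ```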

    On the Development of C++ Instruments

    This paper brings together some ideas regarding computer music instrument development with respect to the C++ language. It looks at these from two perspectives: that of the development of self-contained instruments with the use of a class library, and that of the programming of plugin modules for a music programming system. Working code examples illustrate the paper throughout
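
    A minimal sketch of the first perspective, the self-contained instrument, might look like the class below. This is not the class library discussed in the paper; the names and interface are invented purely for illustration.

    ```cpp
    #include <cmath>
    #include <vector>

    // Hypothetical self-contained C++ instrument: the object owns its state
    // (frequency, amplitude, phase) and renders audio blocks on demand.
    class SineInstrument {
    public:
        explicit SineInstrument(double sample_rate) : sample_rate_(sample_rate) {}

        void note_on(double freq, double amp) {
            freq_ = freq;
            amp_ = amp;
            phase_ = 0.0;
        }

        // Render one block of samples into the output buffer.
        void process(std::vector<float>& out) {
            const double kPi = 3.14159265358979323846;
            const double incr = 2.0 * kPi * freq_ / sample_rate_;
            for (float& s : out) {
                s = static_cast<float>(amp_ * std::sin(phase_));
                phase_ += incr;
            }
        }

    private:
        double sample_rate_;
        double freq_ = 440.0;
        double amp_ = 0.0;
        double phase_ = 0.0;
    };
    ```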

    Streaming Frequency-domain DAFX in Csound5

    This article discusses the implementation of frequency-domain digital audio effects using the Csound 5 music programming language, with its streaming frequency-domain signal (fsig) framework. Introduced in Csound 4.13 by Richard Dobson, the framework was further extended by Victor Lazzarini in version 5. The latest release of Csound incorporates a variety of new opcodes for different types of spectral manipulation. This article introduces the fsig framework and its analysis and resynthesis unit generators. It describes in detail the different types of spectral DAFX made possible by these new opcodes
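
    The fsig format streams frames of amplitude/frequency pairs, one pair per analysis bin, between opcodes. As a rough C++ sketch (not Csound code, and not the actual opcode implementations), a streaming spectral effect of the kind described here rewrites such a frame in place between analysis and resynthesis:

    ```cpp
    #include <cstddef>
    #include <vector>

    // Sketch of a simple spectral "gate": frames are assumed to hold interleaved
    // amplitude/frequency pairs (amp0, freq0, amp1, freq1, ...); bins whose
    // amplitude falls below the threshold are silenced, their frequencies kept.
    void spectral_gate(std::vector<float>& frame, float threshold) {
        for (std::size_t i = 0; i + 1 < frame.size(); i += 2) {
            if (frame[i] < threshold)
                frame[i] = 0.0f;
        }
    }
    ```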

    Streaming Spectral Processing with Consumer-level Graphics Processing Units

    This paper describes the implementation of a streaming spectral processing system for real-time audio on a consumer-level onboard GPU (Graphics Processing Unit) attached to an off-the-shelf laptop computer. It explores the implementation of four processes: standard phase vocoder analysis and synthesis, additive synthesis, and the sliding phase vocoder. These were developed under the CUDA development environment as plugins for the Csound 6 audio programming language. Following a detailed exposition of the GPU code, results of performance tests are discussed for each algorithm. They demonstrate that such a system is capable of real-time audio, even under the restrictions imposed by a limited GPU capability
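
    The additive-synthesis case gives a sense of why these algorithms map well to a GPU: each output sample is an independent sum over partials. The CPU sketch below (plain C++ rather than the paper's CUDA plugin code; names and figures are assumptions) shows the loop structure that a GPU version would distribute across threads.

    ```cpp
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // One block of additive synthesis: every (sample, partial) term is
    // independent, so a GPU can assign samples (or partials) to threads.
    std::vector<float> additive_block(const std::vector<float>& amps,
                                      const std::vector<float>& freqs,
                                      double sample_rate,
                                      std::size_t nsamps,
                                      double start_time) {
        const double two_pi = 6.283185307179586;
        std::vector<float> out(nsamps, 0.0f);
        for (std::size_t n = 0; n < nsamps; ++n) {     // parallel over samples on a GPU
            const double t = start_time + n / sample_rate;
            double sum = 0.0;
            for (std::size_t p = 0; p < amps.size(); ++p)
                sum += amps[p] * std::sin(two_pi * freqs[p] * t);
            out[n] = static_cast<float>(sum);
        }
        return out;
    }
    ```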

    The Csound Plugin Opcode Framework

    This article introduces the Csound Plugin Opcode Framework (CPOF), which aims to provide a simple, lightweight C++ framework for the development of new unit generators for Csound. The original interface for this type of work is provided in the C language, and it still offers the most complete set of components to cover all possible requirements. CPOF attempts to allow a simpler and more economical approach to creating plugin opcodes. The paper explores the fundamental characteristics of the framework and how it is used in practice. The helper classes that are included in CPOF are presented with examples. Finally, we look at some uses in the Csound source codebase
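
    To illustrate the class-based style CPOF is aiming for, the sketch below shows the general shape of a plugin unit generator: a small class with an init routine and an audio-rate perform routine. The names here are invented for the sketch rather than taken from CPOF itself; the real base classes and registration calls live in the Csound codebase.

    ```cpp
    #include <cstddef>

    // Hypothetical, simplified plugin opcode: an init-time setup method plus an
    // audio-rate perform method that processes one block per call.
    struct GainOpcode {
        float gain = 1.0f;

        // Called once when the opcode is instantiated.
        int init(float g) {
            gain = g;
            return 0;
        }

        // Audio-rate perform routine: scale the input block by the gain.
        int aperf(const float* in, float* out, std::size_t nsamps) {
            for (std::size_t i = 0; i < nsamps; ++i)
                out[i] = in[i] * gain;
            return 0;
        }
    };
    ```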

    Portfolio of Compositions with Commentaries

    This portfolio analyses the creative means by which a number of audio and visual compositions were realised. It attempts to dissect the influential factors behind these pieces and to explore the technological processes involved in their creation. It is a personal analysis of a body of work which represents a hybrid of influences, spanning several years. It is supported by three DVDs, which contain audio and visual material and the software files used in the composition and performance of these works

    Portfolio of Electroacoustic Compositions with Commentaries

    This portfolio consists of electroacoustic compositions which were primarily realised through the use of corporeally informed compositional practices. The manner in which a composer interacts with the compositional tools and musical materials at their disposal is a defining factor in the creation of musical works. Although the use of computers in the practice of electroacoustic composition has extended the range of sonic possibilities afforded to composers, it has also had a negative impact on the level of physical interaction that composers have with these musical materials. This thesis is an investigation into the use of mediation technologies with the aim of circumventing issues relating to the physical performance of electroacoustic music. This line of inquiry has led me to experiment with embedded computers, wearable technologies, and a range of various sensors. The specific tools that were used in the creation of the pieces within this portfolio are examined in detail within this thesis. I also provide commentaries and analysis of the eleven electroacoustic works which comprise this portfolio, describing the thought processes that led to their inception, the materials used in their creation, and the tools and techniques that I employed throughout the compositional process

    Artificial Simulation of Audio Spatialisation: Developing a Binaural System

    Sound localisation deals with how and why we can locate sound sources in our spatial environment. Sound spatialisation defines how sound is distributed in this environment. Several acoustic and psychoacoustic phenomena are involved in sound localisation and spatialisation. The importance of these phenomena becomes apparent when endeavouring to recreate and emulate auditory spatial events using computers