839 research outputs found

    Six Minute Snake Bite for solo bass clarinet and backing track

    Six Minute Snake Bite is a composition for bass clarinet and electronic audio track. The goal was to create a piece that exhibits intensity and aggressiveness generated through computerized sounds, fusing the concert hall tradition with the sounds of progressive rock. To achieve this, I modeled the piece on works by dance and rock music groups such as Dub Trio and Death Grips, experimenting with the compositional tools, strategies, and sounds these artists used. I also sought information from the performer, Asher Carlson, on specific ways to achieve these results on the bass clarinet. Six Minute Snake Bite was written specifically for Carlson, who, through collaboration and questioning, guided the use of extended techniques as well as the development of more traditional musical passages. The concert hall tradition is represented in the piece through its structural elements and approach to motivic development.

    String Sing-Along (Sympathetic Vibration) Is Not the Key to Banjo Sound

    Among instruments that do not sport explicit sympathetic strings, banjos produce particularly strong sympathetic and other secondary sounds arising from interactions among the melody strings. This certainly contributes to the characteristic timbre and is easily demonstrated and explained. However, a sampling-synthesis experiment suggests that it is not an essential feature of what distinguishes banjo sound from that of other acoustic plucked instruments.

    A Parametric Sound Object Model for Sound Texture Synthesis

    This thesis deals with the analysis and synthesis of sound textures based on parametric sound objects. An overview is provided about the acoustic and perceptual principles of textural acoustic scenes, and technical challenges for analysis and synthesis are considered. Four essential processing steps for sound texture analysis are identified, and existing sound texture systems are reviewed, using the four-step model as a guideline. A theoretical framework for analysis and synthesis is proposed. A parametric sound object synthesis (PSOS) model is introduced, which is able to describe individual recorded sounds through a fixed set of parameters. The model, which applies to harmonic and noisy sounds, is an extension of spectral modeling and uses spline curves to approximate spectral envelopes, as well as the evolution of parameters over time. In contrast to standard spectral modeling techniques, this representation uses the concept of objects instead of concatenated frames, and it provides a direct mapping between sounds of different length. Methods for automatic and manual conversion are shown. An evaluation is presented in which the ability of the model to encode a wide range of different sounds has been examined. Although there are aspects of sounds that the model cannot accurately capture, such as polyphony and certain types of fast modulation, the results indicate that high quality synthesis can be achieved for many different acoustic phenomena, including instruments and animal vocalizations. In contrast to many other forms of sound encoding, the parametric model facilitates various techniques of machine learning and intelligent processing, including sound clustering and principal component analysis. Strengths and weaknesses of the proposed method are reviewed, and possibilities for future development are discussed.
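
The spline-envelope idea can be sketched in a few lines. This is a minimal illustration, not the thesis's PSOS implementation: the control points, the harmonic source, and all parameter values below are invented for the example.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical control points (frequency in Hz, level in dB) for one sound;
# a spline through a handful of points stands in for the spectral envelope.
ctrl_freq = np.array([100.0, 400.0, 1000.0, 2500.0, 6000.0, 12000.0])
ctrl_level_db = np.array([-12.0, -3.0, 0.0, -9.0, -24.0, -48.0])
envelope = CubicSpline(ctrl_freq, ctrl_level_db)

# Evaluate the envelope at a harmonic series (f0 = 220 Hz) to get partial
# amplitudes, then additively resynthesize one short frame from them.
f0 = 220.0
partials = f0 * np.arange(1, 28)            # 27 harmonics, all inside range
amps = 10.0 ** (envelope(partials) / 20.0)  # dB -> linear amplitude

sr, dur = 44100, 0.05
t = np.arange(int(sr * dur)) / sr
frame = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(partials, amps))
```

Because the envelope is parametric rather than frame-based, the same handful of spline coefficients can be evaluated at any harmonic grid, which is what enables the direct mapping between sounds of different length.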

    Acoustics of the banjo: measurements and sound synthesis

    Measurements of vibrational response of an American 5-string banjo and of the sounds of played notes on the instrument are presented, and contrasted with corresponding results for a steel-string guitar. A synthesis model, fine-tuned using information from the measurements, has been used to investigate what acoustical features are necessary to produce recognisable banjo-like sound, and to explore the perceptual salience of a wide range of design modifications. Recognisable banjo sound seems to depend on the pattern of decay rates of “string modes”, the loudness magnitude and profile, and a transient contribution to each played note from the “body modes”. A formant-like feature, peaking around 500–800 Hz on the banjo tested, is found to play a key role. At higher frequencies the dynamic behaviour of the bridge produces additional formant-like features, reminiscent of the “bridge hill” of the violin, and these also produce clear perceptual effects.

    Hybrid sparse and low-rank time-frequency signal decomposition

    We propose a new hybrid (or morphological) generative model that decomposes a signal into two (and possibly more) layers. Each layer is a linear combination of localised atoms from a time-frequency dictionary. One layer has a low-rank time-frequency structure while the other has a sparse structure. The time-frequency resolutions of the dictionaries describing each layer may be different. Our contribution builds on the recently introduced Low-Rank Time-Frequency Synthesis (LRTFS) model and proposes an iterative algorithm similar to the popular iterative shrinkage/thresholding algorithm. We illustrate the capacities of the proposed model and estimation procedure on a tonal + transient audio decomposition example. Index Terms: low-rank time-frequency synthesis, sparse component analysis, hybrid/morphological decompositions, non-negative matrix factorisation.
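
For readers unfamiliar with the algorithm family the abstract builds on, here is a toy iterative shrinkage/thresholding (ISTA) sketch for a single sparse layer. It is not the paper's LRTFS estimation procedure; the dictionary `A`, the dimensions, and the sparsity weight are all invented for the example.

```python
import numpy as np

# Recover a sparse coefficient vector from y = A @ x_true by alternating a
# gradient step on the data-fit term with a soft-thresholding (shrinkage) step.
rng = np.random.default_rng(0)
n_obs, n_atoms, k = 64, 128, 5
A = rng.standard_normal((n_obs, n_atoms)) / np.sqrt(n_obs)  # toy dictionary
x_true = np.zeros(n_atoms)
x_true[rng.choice(n_atoms, k, replace=False)] = 3.0         # k active atoms
y = A @ x_true

lam = 0.05                                 # sparsity weight (hypothetical)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const. of grad
x = np.zeros(n_atoms)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))     # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
```

In the hybrid setting of the paper, an analogous update is applied per layer, with the low-rank layer using its own structured proximal step in place of the elementwise threshold.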

    Real-time Sound Source Separation For Music Applications

    Sound source separation refers to the task of extracting individual sound sources from mixtures of those sources. In this thesis, a novel sound source separation algorithm for musical applications is presented. It leverages the fact that the vast majority of commercially recorded music since the 1950s has been mixed down for two-channel reproduction, more commonly known as stereo. The algorithm, presented in Chapter 3 of this thesis, requires no prior knowledge or learning and performs separation based purely on azimuth discrimination within the stereo field. It exploits the use of the pan pot as a means of achieving image localisation within stereophonic recordings; as such, only an interaural intensity difference exists between the left and right channels for a single source. We use gain scaling and phase cancellation techniques to expose frequency-dependent nulls across the azimuth domain, from which source separation and resynthesis are carried out. The algorithm is demonstrated not only to be state of the art in sound source separation but also to be a useful pre-process for tasks such as music segmentation and surround sound upmixing.
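
The gain-scaling and cancellation idea can be shown on a hypothetical two-source pan-pot mix (the signals and gains below are invented; this is not the thesis code). Intensity panning places each source in both channels with only a gain difference, so scaling one channel and subtracting drives a chosen source to a null.

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr
s1 = np.sin(2 * np.pi * 440 * t)    # source panned left of centre
s2 = np.sin(2 * np.pi * 660 * t)    # source panned right of centre

gL1, gR1 = 0.8, 0.2                 # pan-pot gains for s1 (hypothetical)
gL2, gR2 = 0.3, 0.7                 # pan-pot gains for s2 (hypothetical)
L = gL1 * s1 + gL2 * s2
R = gR1 * s1 + gR2 * s2

# L - g*R = (gL1 - g*gR1)*s1 + (gL2 - g*gR2)*s2, so s1 is cancelled exactly
# at the gain ratio g = gL1/gR1: a null appears at its azimuth.
g = gL1 / gR1
residual = L - g * R                # s1 nulled; only (a scaled) s2 remains
```

Sweeping `g` across its range and doing this per frequency bin is what exposes the frequency-dependent nulls from which each source is separated and resynthesized.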

    Re-Sonification of Objects, Events, and Environments

    Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes that re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds classified by acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds.
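
As a point of reference for the comparison with modal models, a minimal modal synthesis sketch renders a struck object as a sum of exponentially decaying sinusoids. The mode frequencies, decay rates, and amplitudes below are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

sr = 44100
t = np.arange(int(0.5 * sr)) / sr
modes = [(220.0, 3.0, 1.0),     # (freq in Hz, decay rate 1/s, amplitude)
         (440.5, 4.5, 0.5),     # slightly inharmonic upper partials,
         (883.0, 6.0, 0.3)]     # as typical of struck objects
note = sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
           for f, d, a in modes)
note /= np.max(np.abs(note))    # normalise to full scale
```

A banded waveguide replaces each such mode (or band of modes) with a bandpass-filtered delay loop, which is what allows nonlinear interactions like bowing to feed energy back into the model rather than merely exciting fixed decays.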