
    A Linear Hybrid Sound Generation of Musical Instruments using Temporal and Spectral Shape Features

    The generation of hybrid musical instrument sounds by morphing has long been of great interest to the music world. The proposed method exploits the temporal and spectral shape features of the sound for this purpose. These features are chosen because they capture the most perceptually salient dimensions of timbre perception, namely the attack time and the distribution of spectral energy. A wide variety of sound synthesis algorithms is currently available, and sound synthesis methods have become increasingly computationally efficient. Wavetable synthesis is widely adopted by digital sampling instruments, or samplers. The overlap-add (OLA) method refers to a family of algorithms that produce a signal by properly assembling a number of signal segments. In granular synthesis, sound is treated as a sequence of overlapping elementary acoustic elements called grains. The simplest morph is a cross-fade of amplitudes in the time domain, which can be obtained through cross-synthesis. A hybrid sound is generated with each of these methods to determine which yields the most linear morph. The result is evaluated with an error measure: the difference between the feature calculated on the morph and the feature interpolated from the source sounds. The ultimate aim of the work is to extract a morph in a perceptually pleasant manner. DOI: 10.17762/ijritcc2321-8169.16045
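    The two building blocks of the abstract above — a time-domain cross-fade morph and a linearity error measure comparing a feature of the morph against the interpolated features of the sources — can be sketched as follows. This is an illustrative sketch, not the paper's implementation; `feature_of` stands in for any scalar feature extractor (e.g. attack time or spectral centroid).

```python
import numpy as np

def crossfade_morph(a, b, alpha):
    # Simplest morph: linear amplitude cross-fade in the time domain.
    # alpha = 0 returns sound `a`, alpha = 1 returns sound `b`.
    return (1.0 - alpha) * a + alpha * b

def morph_error(feature_of, a, b, alpha):
    # Linearity criterion: difference between the feature measured on
    # the morph and the linear interpolation of the source features.
    measured = feature_of(crossfade_morph(a, b, alpha))
    interpolated = (1.0 - alpha) * feature_of(a) + alpha * feature_of(b)
    return abs(measured - interpolated)
```

    For any feature that is itself linear in the signal (such as the mean amplitude), the cross-fade morph is perfectly linear and the error is zero; perceptual features such as attack time generally are not, which is what the paper's comparison across synthesis methods quantifies.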

    Sound morphing by feature interpolation


    Sound morphing strategies based on alterations of time-frequency representations by Gabor multipliers

    Sound morphing is an important topic in the signal processing of musical sounds and covers a wide variety of techniques whose aim is to "interpolate" between two sound signals. We present an approach based on the alteration of time-frequency representations. Time-frequency analysis is a classical tool in sound analysis/synthesis. A time-frequency filter can be well defined as a diagonal signal operator in a Gabor representation of sounds; processing is then performed by multiplying a time-frequency representation by such a time-frequency filter, called a Gabor mask. After estimating a Gabor mask between two sounds, we explore strategies for parametrizing it to obtain a static morph between them. We then compare this approach with standard and non-standard morphing approaches based on different kinds of sound combination, notably classical means in the time-frequency domain.
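    The core operation described above — estimating a time-frequency mask between two sounds and applying a parametrized version of it to morph one toward the other — can be sketched with an STFT standing in for the Gabor representation. This is a simplified illustration under assumed conditions (equal-length signals, magnitude-ratio mask, source phase retained), not the paper's estimation strategy.

```python
import numpy as np
from scipy.signal import stft, istft

def gabor_mask_morph(source, target, alpha, nperseg=256):
    # Analyze both sounds in the time-frequency domain.
    _, _, S = stft(source, nperseg=nperseg)
    _, _, T = stft(target, nperseg=nperseg)
    # Estimate a diagonal (pointwise) mask as the magnitude ratio;
    # eps guards against division by zero in silent bins.
    eps = 1e-12
    mask = (np.abs(T) + eps) / (np.abs(S) + eps)
    # Parametrize the mask: alpha = 0 leaves the source unchanged,
    # alpha = 1 imposes the target's magnitude envelope.
    morphed = S * mask**alpha
    _, y = istft(morphed, nperseg=nperseg)
    return y
```

    Raising the mask to a power is one simple parametrization of a static morph; the phase is inherited from the source, which is one of the design choices such strategies must address.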

    Dynamics of a hybrid morphing wing with active open loop vibrating trailing edge by Time-Resolved PIV and force measures

    A quantitative characterization of the effects obtained by high-frequency, low-amplitude trailing-edge actuation is presented. Particle image velocimetry, pressure, and aerodynamic force measurements are carried out on a wing prototype equipped with shape memory alloys and trailing-edge piezoelectric actuators, simultaneously allowing high deformations (bending) at low frequency and higher-frequency vibrations. The effects of this hybrid morphing on the forces have been quantified, and an optimal actuation range able to increase lift and decrease drag has been identified. The present study focuses more specifically on the effects of the higher-frequency vibrations of the trailing-edge region. This actuation allows manipulation of the turbulent structures in the wake. It has been shown that specific frequency and amplitude ranges achieved by the piezoelectric actuators produce a breakdown of the larger coherent eddies by means of an upscale energy transfer from smaller-scale eddies in the near wake. This results in a thinning of the shear layers and of the wake's width, associated with a reduction of the form drag, as well as a reduction of the predominant frequency peaks of the shear-layer instability. These effects have been demonstrated by means of frequency-domain analysis and Proper Orthogonal Decomposition.

    Interactive computer music: a performer's guide to issues surrounding Kyma with live clarinet input

    Musicians are familiar with interaction in the rehearsal and performance of music. Technology has become sophisticated and affordable to the point where interaction with a computer in real-time performance is also possible. The nature of live interactive electronic music has blurred the distinction between the formerly exclusive realms of composition and performance. It is quite possible for performers to participate in the genre, but currently little information is available for those wishing to explore it. This written document contains a definition of interaction, a discussion of how it occurs in traditional music-making, and a brief history of the emergence of live interaction in computer music. It also discusses the concept of live interaction and its aesthetic value, and highlights the possibilities of live interactive computer music using clarinet and the Kyma system, revealing ways a performer may maximize the interactive experience. Written from a player's perspective, the document describes possible methods of interaction with Kyma and live clarinet input in two areas: the clarinet used as a controller, and the clarinet used as a source of sound. Information is provided on technical issues such as the speaker system, performance-space acoustics and diffusion options, possible interactive inputs, and microphone choices for clarinet in particular. There is little information available for musicians contemplating the use of Kyma; clarinetists especially will find in this paper a practical guide to many aspects of live electronic interaction and will be better informed to explore the field. This area has the potential to expand not only our performing opportunities but also economic development. Interactive music technology can be applied in a traditional recital and in collaborative work with other art forms, installation projects, and even music therapy. Knowledge of these programs also opens possibilities for sound design in theatre, film, and other commercial applications.

    THE RISE OF WAVETABLE SYNTHESIS IN COMMERCIAL MUSIC AND ITS CREATIVE APPLICATIONS

    Wavetable synthesis is a powerful tool for music creation that helps composers and producers develop their own unique sounds. Though wavetable synthesis has been utilized in music since the early 1980s, advancements in computer technologies in the 2000s and the subsequent releases of software synthesizers in the late 2000s and early 2010s have led to the increased presence of wavetable synthesis in commercial music. This thesis chronicles a historical overview of the use of wavetable synthesis in commercial music and demonstrates the accessibility and power that wavetable synthesis delivers in music creation. The demonstration portion of this thesis features two original compositions in the style of electronic dance music (EDM) that prominently incorporate original wavetable instruments created from recordings of two motorized vehicles, as well as an overview of the processes of their creation.
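    The technique the thesis builds on — playing back a stored single-cycle waveform at an arbitrary pitch — reduces to reading a table at a frequency-dependent phase increment. A minimal sketch, assuming a single fixed wavetable (real wavetable synthesizers interpolate between many tables over time):

```python
import numpy as np

def wavetable_oscillator(table, freq, duration, sr=44100):
    # Read the stored single-cycle waveform at a phase increment set by
    # the desired frequency, with linear interpolation between entries.
    n = len(table)
    phase = (np.arange(int(duration * sr)) * freq * n / sr) % n
    i0 = phase.astype(int)
    i1 = (i0 + 1) % n          # wrap around at the end of the cycle
    frac = phase - i0
    return (1.0 - frac) * table[i0] + frac * table[i1]

# One cycle of a sine as the table; any recorded single cycle
# (e.g. extracted from a motor recording) works the same way.
table = np.sin(2 * np.pi * np.arange(2048) / 2048)
tone = wavetable_oscillator(table, freq=440.0, duration=0.1)
```

    Because the table contents are arbitrary, the same oscillator turns any captured waveform cycle into a pitched instrument, which is what makes the approach attractive for sound design from found recordings.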

    A Parametric Sound Object Model for Sound Texture Synthesis

    This thesis deals with the analysis and synthesis of sound textures based on parametric sound objects. An overview is provided of the acoustic and perceptual principles of textural acoustic scenes, and the technical challenges for analysis and synthesis are considered. Four essential processing steps for sound texture analysis are identified, and existing sound texture systems are reviewed, using the four-step model as a guideline. A theoretical framework for analysis and synthesis is proposed. A parametric sound object synthesis (PSOS) model is introduced, which is able to describe individual recorded sounds through a fixed set of parameters. The model, which applies to harmonic and noisy sounds, is an extension of spectral modeling and uses spline curves to approximate spectral envelopes, as well as the evolution of parameters over time. In contrast to standard spectral modeling techniques, this representation uses the concept of objects instead of concatenated frames, and it provides a direct mapping between sounds of different length. Methods for automatic and manual conversion are shown. An evaluation is presented in which the ability of the model to encode a wide range of different sounds has been examined. Although there are aspects of sounds that the model cannot accurately capture, such as polyphony and certain types of fast modulation, the results indicate that high-quality synthesis can be achieved for many different acoustic phenomena, including instruments and animal vocalizations. In contrast to many other forms of sound encoding, the parametric model facilitates various techniques of machine learning and intelligent processing, including sound clustering and principal component analysis. Strengths and weaknesses of the proposed method are reviewed, and possibilities for future development are discussed.
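    The central representational idea above — approximating a spectral envelope with a spline described by a small, fixed set of parameters — can be illustrated with a least-squares spline fit. This is a sketch of the general idea, not the thesis's PSOS model; the knot placement and count are assumptions.

```python
import numpy as np
from scipy.interpolate import splrep, splev

def spline_envelope(magnitudes, n_knots=8):
    # Fit a cubic least-squares spline to a spectral magnitude envelope.
    # The (knots, coefficients) pair is a fixed-size parametric
    # description of the envelope, independent of the FFT resolution.
    bins = np.arange(len(magnitudes), dtype=float)
    # Interior knots evenly spaced over the frequency axis.
    knots = np.linspace(bins[1], bins[-2], n_knots)
    tck = splrep(bins, magnitudes, t=knots, k=3)
    return splev(bins, tck), tck

# Smooth synthetic envelope: an exponentially decaying spectrum.
env = np.exp(-np.arange(128) / 32.0)
approx, params = spline_envelope(env)
```

    Because every envelope is reduced to the same number of spline coefficients, sounds of different length map onto vectors of identical dimension, which is what makes downstream clustering and principal component analysis straightforward.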