483 research outputs found

    Online Correction of Dispersion Error in 2D Waveguide Meshes

An ideal elastic 2D propagation medium, i.e., a membrane, can be simulated by models discretizing the wave equation on the time-space grid (finite difference methods), or locally discretizing the solution of the wave equation (waveguide meshes). The two approaches provide equivalent computational structures, and introduce numerical dispersion that induces a misalignment of the modes from their theoretical positions. Prior literature shows that dispersion can be arbitrarily reduced by oversizing and oversampling the mesh, or by adopting offline warping techniques. In this paper we propose to reduce numerical dispersion by embedding warping elements, i.e., properly tuned allpass filters, in the structure. The resulting model exhibits a significant reduction in dispersion, and requires fewer computational resources than a regular mesh structure of comparable accuracy. Comment: 4 pages, 5 figures; to appear in the Proceedings of the International Computer Music Conference, 2000. Corrected first reference.
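The warping idea can be illustrated with a first-order allpass section: it passes every frequency at unit magnitude but delays each by a frequency-dependent amount, which is what lets it counteract dispersion when it replaces the unit delays of a mesh. The sketch below is a minimal Python/NumPy illustration only; the coefficient `lam` and how it would be tuned are assumptions, not the paper's values.

```python
import numpy as np

def allpass1(x, lam):
    """First-order allpass A(z) = (lam + z^-1) / (1 + lam * z^-1).
    'lam' is a hypothetical warping coefficient; the paper derives its tuning
    from the mesh's dispersion error, which is not reproduced here."""
    y = np.zeros(len(x))
    x1 = y1 = 0.0
    for n, xn in enumerate(x):
        y[n] = lam * xn + x1 - lam * y1   # y[n] + lam*y[n-1] = lam*x[n] + x[n-1]
        x1, y1 = xn, y[n]
    return y

# The magnitude response is flat (|A| = 1); only the phase, i.e. the effective
# delay seen by each frequency, is warped -- the property exploited when allpass
# sections replace the unit delays of the waveguide mesh.
w = np.linspace(0.01, np.pi, 8)
A = (0.3 + np.exp(-1j * w)) / (1 + 0.3 * np.exp(-1j * w))
print(np.allclose(np.abs(A), 1.0))   # True
```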

    A digital waveguide-based approach for Clavinet modeling and synthesis

The Clavinet is an electromechanical musical instrument produced in the mid-twentieth century. As is the case for other vintage instruments, it is subject to aging and requires great effort to be maintained or restored. This paper reports analyses conducted on a Hohner Clavinet D6 and proposes a computational model to faithfully reproduce the Clavinet sound in real time, from tone generation to the emulation of the electronic components. The string excitation signal model is physically inspired and represents a cheap solution in terms of both computational resources and, especially, memory requirements (compared, e.g., to sample playback systems). Pickup and amplifier models have been implemented that enhance the natural character of the sound with respect to previous work. The model has been implemented on a real-time software platform, Pure Data, and is capable of 10-voice polyphony with low latency on an embedded device. Finally, subjective listening tests conducted on the current model are compared with previous tests, showing slightly improved results.
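As context for the "physically inspired excitation plus waveguide" idea, here is a generic single-delay-loop (Karplus-Strong-style) string sketch in Python/NumPy. It is not the paper's Clavinet model: the excitation shape, loss value, and loop filter are assumptions chosen only to show how a cheap synthetic excitation can replace stored samples.

```python
import numpy as np

def waveguide_string(f0, fs=44100, dur=1.0, loss=0.996):
    """Minimal single-delay-loop waveguide string (Karplus-Strong style).
    f0 sets the pitch via the delay-line length; 'loss' is a hypothetical
    loop gain controlling the decay time."""
    N = int(round(fs / f0))                      # delay-line length ~ one period
    out = np.zeros(int(dur * fs))
    # physically inspired excitation: short, decaying noise burst instead of a stored sample
    line = np.random.randn(N) * np.exp(-np.linspace(0.0, 5.0, N))
    prev = 0.0
    for n in range(len(out)):
        y = line[n % N]
        line[n % N] = loss * 0.5 * (y + prev)    # averaging loop filter: highs decay faster
        prev = y
        out[n] = y
    return out

# e.g. a short note; pickup and amplifier colouring would be applied to this signal afterwards
note = waveguide_string(261.6, dur=0.5)
```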

    A hybrid keyboard-guitar interface using capacitive touch sensing and physical modeling

This paper was presented at the 9th Sound and Music Computing Conference, Copenhagen, Denmark. This paper presents a hybrid interface based on a touch-sensing keyboard which gives detailed expressive control over a physically-modeled guitar. Physical modeling allows realistic guitar synthesis incorporating many expressive dimensions commonly employed by guitarists, including pluck strength and location, plectrum type, hand damping and string bending. Often, when a physical model is used in performance, most control dimensions go unused when the interface fails to provide a way to intuitively control them. Techniques as foundational as strumming lack a natural analog on the MIDI keyboard, and few digital controllers provide the independent control of pitch, volume and timbre that even novice guitarists achieve. Our interface combines gestural aspects of keyboard and guitar playing. Most dimensions of guitar technique are controllable polyphonically, some of them continuously within each note. Mappings are evaluated in a user study of keyboardists and guitarists, and the results demonstrate its playability by performers of both instruments.
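One concrete piece of such an interface is the mapping layer from per-key sensor data to the guitar model's control inputs. The toy mapping below is only illustrative: the parameter names, ranges, and curves are assumptions, not the mappings evaluated in the paper's user study.

```python
def key_to_pluck(velocity, touch_y, pressure):
    """Map one key event to guitar-model controls (all names hypothetical).
    velocity: MIDI velocity 0-127; touch_y: vertical touch position 0-1;
    pressure: continuous aftertouch 0-1."""
    return {
        "pluck_strength": (velocity / 127.0) ** 1.5,  # soft curve for dynamics
        "pluck_position": 0.10 + 0.80 * touch_y,      # fraction of string length
        "hand_damping":   0.05 + 0.90 * pressure,     # more pressure -> more damping
    }

print(key_to_pluck(velocity=96, touch_y=0.3, pressure=0.1))
```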

    Re-Sonification of Objects, Events, and Environments

Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds. Dissertation/Thesis, Ph.D. Electrical Engineering, 201
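The "input estimation constrained to a signal subspace" idea can be sketched as a least-squares deconvolution in which the excitation is forced to lie in the column space of a basis matrix. This generic NumPy/SciPy sketch stands in for the dissertation's actual estimator; the basis `B`, the impulse response `h`, and the plain least-squares formulation are assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

def estimate_excitation(y, h, B):
    """Estimate an excitation x from an observed output y = h * x (convolution),
    constraining x = B @ c to the column space of basis B (shape: L x k).
    Generic subspace-constrained least squares, not the dissertation's exact method."""
    L = B.shape[0]
    # convolution matrix H so that y = H @ x for an excitation of length L
    H = toeplitz(np.r_[h, np.zeros(L - 1)], np.zeros(L))
    c, *_ = np.linalg.lstsq(H @ B, y, rcond=None)
    return B @ c

# tiny self-check with a known excitation lying in the subspace
rng = np.random.default_rng(0)
B = rng.standard_normal((32, 4))
x_true = B @ np.array([1.0, -0.5, 0.2, 0.0])
h = np.array([1.0, 0.6, 0.3, 0.1])                 # assumed resonator impulse response
y = np.convolve(h, x_true)
print(np.allclose(estimate_excitation(y, h, B), x_true))   # True
```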

    Physical Interactions with Digital Strings - A hybrid approach to a digital keyboard instrument

A new hybrid approach to digital keyboard playing is presented, where the actual acoustic sounds from a digital keyboard are captured with contact microphones and applied as excitation signals to a digital model of a prepared piano, i.e., an extended waveguide model of strings with the possibility of stopping and muting the strings at arbitrary positions. The parameters of the string model are controlled through TouchKeys multitouch sensors on each key, combined with MIDI data and acoustic signals from the digital keyboard frame, using a novel mapping. The instrument is evaluated from a performing musician's perspective, and emerging playing techniques are discussed. Since the instrument is a hybrid acoustic-digital system with several feedback paths between the domains, it provides for expressive and dynamic playing, with qualities approaching those of an acoustic instrument, yet with new kinds of control. The contributions are two-fold. First, the use of acoustic sounds from a physical keyboard for excitations and resonances results in a novel hybrid keyboard instrument in itself. Second, the digital model of "inside piano" playing, using multitouch keyboard data, allows for performance techniques going far beyond conventional keyboard playing.
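The digital part can be pictured as a waveguide string loop that is continuously driven by an external audio signal (here, the contact-mic audio from the keyboard) and whose length and losses are modulated by the touch data. The sketch below is a much-simplified stand-in: the control parameters `stop_pos` and `mute`, their ranges, and the loop filter are assumptions, not the paper's extended waveguide model.

```python
import numpy as np

def prepared_string(excitation, fs=44100, f0=110.0, stop_pos=1.0, mute=0.0):
    """Waveguide string loop driven by an external excitation signal.
    stop_pos in (0, 1]: fraction of the string left sounding (smaller -> higher pitch);
    mute in [0, 1]: extra loop loss, imitating muting the string.
    All parameter names, ranges and values are illustrative assumptions."""
    N = max(2, int(round(fs / f0 * stop_pos)))   # stopping shortens the loop
    g = 0.995 * (1.0 - 0.5 * mute)               # muting raises the per-loop loss
    line = np.zeros(N)
    out = np.zeros(len(excitation))
    prev = 0.0
    for n, x in enumerate(excitation):
        y = line[n % N] + x                      # inject contact-mic sample into the loop
        line[n % N] = g * 0.5 * (y + prev)       # lossy averaging loop filter
        prev = y
        out[n] = y
    return out

# e.g. drive the string with a short click in place of real contact-mic audio
click = np.zeros(22050); click[0] = 1.0
tone = prepared_string(click, f0=220.0, stop_pos=0.5, mute=0.2)
```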

    Model-based digital pianos: from physics to sound synthesis

The piano is arguably one of the most important instruments in Western music due to its complexity and versatility. The size, weight, and price of grand pianos, and the relatively simple control surface (keyboard), have led to the development of digital counterparts aiming to mimic the sound of the acoustic piano as closely as possible. While most commercial digital pianos are based on sample playback, it is also possible to reproduce the sound of the piano by modeling the physics of the instrument. The process of physical modeling starts with understanding the physical principles, then creating accurate numerical models, and finally finding numerically optimized signal processing models that allow sound synthesis in real time by neglecting inaudible phenomena and adding some perceptually important features through signal processing tricks. Accurate numerical models can be used by physicists and engineers to understand the functioning of the instrument, or to help piano makers in instrument development. On the other hand, efficient real-time models are aimed at composers and musicians performing at home or on stage. This paper overviews physics-based piano synthesis, starting from the computationally heavy, physically accurate approaches and then discussing the ones aimed at the best possible sound quality in real-time synthesis.
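At the "physically accurate" end of that pipeline sits direct numerical solution of the string equations. The sketch below integrates only the ideal (lossless, non-stiff) string with an explicit finite-difference scheme, as a toy stand-in: real piano models add stiffness, frequency-dependent losses, a hammer model, and soundboard coupling, and all numbers here (string length, strike position, read-out point) are assumptions.

```python
import numpy as np

def fd_string(f0=220.0, fs=44100, dur=0.5, n_points=60):
    """Explicit finite-difference scheme for the ideal string, u_tt = c^2 u_xx.
    A toy example of the physically accurate, computationally heavy approach;
    all physical values below are assumed for illustration."""
    L = 0.65                                  # string length in metres (assumed)
    c = 2 * L * f0                            # wave speed from the fundamental
    h = L / (n_points - 1)                    # spatial step
    k = 1.0 / fs                              # time step
    lam = c * k / h
    assert lam <= 1.0, "CFL condition violated; use fewer spatial points"
    u_prev = np.zeros(n_points)
    u = np.zeros(n_points)
    u[n_points // 5] = 1e-3                   # crude 'strike': displace near the hammer point
    u_prev[:] = u                             # zero initial velocity
    out = np.zeros(int(dur * fs))
    for n in range(len(out)):
        u_next = np.zeros(n_points)
        u_next[1:-1] = (2 * (1 - lam**2) * u[1:-1]
                        + lam**2 * (u[2:] + u[:-2])
                        - u_prev[1:-1])       # standard leapfrog update, fixed ends
        u_prev, u = u, u_next
        out[n] = u[-8]                        # read-out near the bridge end
    return out
```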