2 research outputs found

    Physical models and musical controllers: designing a novel electronic percussion instrument

    A novel electronic percussion synthesizer prototype is presented. Our ambition is to design an instrument that produces high-quality, realistic sound based on a physical modelling sound synthesis algorithm. This is achieved using a real-time Field Programmable Gate Array (FPGA) implementation of the model, coupled to an interface that aims to make efficient use of all the subtle, nuanced gestures of the instrumentalist. The synthesis is based on a complex physical model of the vibrating plate, the source of sound in the majority of percussion instruments. A Xilinx Virtex-II Pro FPGA core handles the sound synthesis computations, delivering 8 billion operations per second, and has been designed to allow a high level of control and flexibility. Strategies are also presented that allow the parametric space of the model to be mapped to the playing gestures of the percussionist.
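
    As a rough illustration of the kind of physical model this abstract refers to, the sketch below simulates a lossy Kirchhoff thin plate with an explicit finite-difference scheme and reads audio from a single output point. It is written in Python for clarity rather than as FPGA logic, and all parameter values (stiffness, damping, strike and pickup positions) are illustrative assumptions, not values from the paper.

        # Minimal sketch (assumed parameters): explicit finite-difference
        # synthesis of a lossy Kirchhoff plate, the vibrating-plate model
        # described in the abstract, rendered offline in Python.
        import numpy as np

        SR    = 44100                      # audio sample rate (Hz)
        k     = 1.0 / SR                   # time step (s)
        kappa = 20.0                       # plate stiffness parameter (assumed)
        sigma = 1.0                        # frequency-independent loss (assumed)
        h     = 2.05 * np.sqrt(kappa * k)  # grid spacing just above the stability limit
        N     = int(1.0 / h)               # grid points per side of a unit plate

        def laplacian(u):
            # 5-point discrete Laplacian; edges held at zero displacement.
            lap = np.zeros_like(u)
            lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                               u[1:-1, 2:] + u[1:-1, :-2] -
                               4.0 * u[1:-1, 1:-1])
            return lap

        u_prev = np.zeros((N, N))
        u_now  = np.zeros((N, N))
        u_now[N // 3, N // 4] = 1e-3       # initial "strike" displacement

        out = np.zeros(SR)                 # one second of audio output
        for n in range(SR):
            biharm = laplacian(laplacian(u_now)) / h**4
            u_next = (2.0 * u_now - (1.0 - sigma * k) * u_prev
                      - (kappa * k)**2 * biharm) / (1.0 + sigma * k)
            u_prev, u_now = u_now, u_next
            out[n] = u_now[2 * N // 3, N // 2]   # read audio at a pickup point

    In the actual instrument the corresponding update runs in real time on the FPGA, and it is the model's parametric space that is mapped to the player's gestures.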

    Real-Time Physically Based Sound Synthesis and Application in Multimodal Interaction

    An immersive experience in virtual environments requires realistic auditory feedback that is closely coupled with other modalities, such as vision and touch. This is particularly challenging for real-time applications because of their stringent computational requirements. In this dissertation, I present and evaluate effective real-time physically based sound synthesis models that integrate visual and touch data, and apply them to create richly varying multimodal interaction. I first propose an efficient contact sound synthesis technique that accounts for the texture information used for visual rendering and greatly reinforces cross-modal perception. Second, I present both empirical and psychoacoustic approaches that formally study the geometry-invariant property of the material model commonly used in real-time sound synthesis. Based on this property, I design a novel example-based material parameter estimation framework that automatically creates synthetic sound effects naturally controlled by complex geometry and dynamics in visual simulation. Lastly, I translate user touch input captured on commodity multi-touch devices into physical performance models that drive both visual and auditory rendering. This novel multimodal interaction is demonstrated in a virtual musical instrument application on both a large tabletop and mobile tablet devices, and evaluated through pilot studies. Such an application offers capabilities for intuitive and expressive music playing, rapid prototyping of virtual instruments, and active exploration of sound effects determined by various physical parameters.
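
    To make the "physically based" part concrete, the sketch below renders a single contact as a bank of exponentially decaying sinusoidal modes whose decay rates follow a Rayleigh-style damping model, a commonly used material model of the kind the abstract analyses. The modal frequencies, gains, and the alpha/beta material values are illustrative assumptions, not figures from the dissertation.

        # Minimal modal-synthesis sketch (assumed parameters): one impact is
        # rendered as a sum of damped sinusoids, with per-mode decay given by
        # a Rayleigh-type material model (alpha, beta) that does not depend
        # on the object's geometry.
        import numpy as np

        SR = 44100                                   # audio sample rate (Hz)

        def modal_impact(freqs_hz, gains, alpha, beta, strength, dur_s=1.0):
            t = np.arange(int(dur_s * SR)) / SR
            audio = np.zeros_like(t)
            for f, g in zip(freqs_hz, gains):
                omega = 2.0 * np.pi * f
                decay = 0.5 * (alpha + beta * omega**2)   # material-driven damping
                audio += strength * g * np.exp(-decay * t) * np.sin(omega * t)
            return audio

        # Three assumed modes of a small struck object.
        audio = modal_impact(freqs_hz=[523.0, 1370.0, 2210.0],
                             gains=[1.0, 0.5, 0.25],
                             alpha=5.0, beta=2e-7,
                             strength=0.3)

    In the dissertation's framework, the material parameters are estimated from example recordings, while the geometry and dynamics of the visual simulation, together with touch input, determine when and how strongly the modes are excited.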