660 research outputs found

    Real-time sound synthesis for paper material based on geometric analysis

    Get PDF
    In this article, we present the first method to generate plausible sounds while animating crumpling virtual paper in real time. Our method handles the shape-dependent friction and crumpling sounds that typically occur when manipulating or creasing paper by hand. Based on a run-time geometric analysis of the deforming surface, we identify resonating regions that characterize the sound being produced. Coupled with a fast analysis of the surrounding elements, the sound can be efficiently spatialized to take nearby wall or table reflectors into account. Finally, the sound is synthesized in real time using a pre-recorded database of frequency- and time-domain sound sources. Our synthesized sounds are evaluated by comparing them to recordings for a specific set of paper deformations.
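    The abstract above mentions synthesis from a pre-recorded database of sound sources driven by geometric analysis. As a minimal toy sketch of such an example-based stage (the descriptor, database layout, and scaling are assumptions for illustration, not details from the paper):

    ```python
    import numpy as np

    # Hypothetical illustration: pick a pre-recorded grain whose descriptor
    # (here, a resonating-region area in cm^2) best matches the region
    # detected by the geometric analysis. All data are random placeholders.
    rng = np.random.default_rng(0)

    database = [
        (2.0, rng.standard_normal(256)),   # (descriptor, mono sound grain)
        (8.0, rng.standard_normal(256)),
        (20.0, rng.standard_normal(256)),
    ]

    def synthesize(region_area, gain=1.0):
        """Return a gain-scaled copy of the best-matching grain."""
        distances = [abs(area - region_area) for area, _ in database]
        best = int(np.argmin(distances))
        _, grain = database[best]
        return gain * grain

    out = synthesize(7.5)
    assert out.shape == (256,)
    ```

    A real system would blend several grains and spatialize the result; the sketch only shows the nearest-descriptor lookup.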

    Interacting with Acoustic Simulation and Fabrication

    Full text link
    Incorporating accurate physics-based simulation into interactive design tools is challenging. However, adding the physics accurately becomes crucial to several emerging technologies. For example, in virtual/augmented reality (VR/AR) videos, the faithful reproduction of surrounding audio is required to bring immersion to the next level. Similarly, as personal fabrication is made possible with accessible 3D printers, more intuitive tools that respect physical constraints can help artists prototype designs. One main hurdle is the sheer computational complexity of accurately reproducing real-world phenomena through physics-based simulation. In my thesis research, I develop interactive tools that implement efficient physics-based simulation algorithms for automatic optimization and intuitive user interaction. Comment: ACM UIST 2017 Doctoral Symposium.

    Interactive procedural simulation of paper tearing with sound

    Get PDF
    We present a phenomenological model for the real-time simulation of paper tearing and its sound. The model takes as input the rotations of the hands, along with the index fingers and thumbs of the left and right hands, to drive the position and orientation of two regions of a sheet of paper. The motion of the hands produces a cone-shaped deformation of the paper and guides the formation and growth of the tear. We create a model for the direction of the tear based on empirical observation, and add detail to the tear with a directed noise model. Furthermore, we present a procedural sound synthesis method to produce tearing sounds during interaction. We show a variety of paper tearing examples and discuss applications and limitations.

    Exploring sonic interaction design and presence: Natural Interactive Walking in Porto

    Get PDF

    Sketching sonic interactions by imitation-driven sound synthesis

    Get PDF
    Sketching is at the core of every design activity. In visual design, pencil and paper are the preferred tools to produce sketches for their simplicity and immediacy. Analogue tools for sonic sketching do not exist yet, although voice and gesture are embodied abilities commonly exploited to communicate sound concepts. The EU project SkAT-VG aims to support vocal sketching with computer-aided technologies that can be easily accessed, understood and controlled through vocal and gestural imitations. This imitation-driven sound synthesis approach is meant to overcome the ephemerality and timbral limitations of human voice and gesture, allowing designers to produce more refined sonic sketches and to think about sound in a more designerly way. This paper presents two main outcomes of the project: the Sound Design Toolkit, a palette of basic sound synthesis models grounded in ecological perception and the physical description of sound-producing phenomena, and SkAT-Studio, a visual framework based on sound design workflows organized in stages of input, analysis, mapping, synthesis, and output. The integration of these two software packages provides an environment in which sound designers can go from concepts, through exploration and mocking-up, to prototyping in sonic interaction design, taking advantage of all the possibilities offered by vocal and gestural imitations at every step of the process.

    An Acoustic Wind Machine and its Digital Counterpart : Initial Audio Analysis and Comparison

    Get PDF
    As part of an investigation into the potential of historical theatre sound effects as a resource for Sonic Interaction Design (SID), an acoustic theatre wind machine was constructed and analysed as an interactive sounding object. Using the Sound Designer’s Toolkit (SDT), a digital, physical modelling-based version of the wind machine was programmed, and the acoustic device was fitted with a sensor system to control the digital model. This paper presents an initial comparison between the sound output of the acoustic theatre wind machine and its digital counterpart. Three simple and distinct rotational gestures are chosen to explore the main acoustic parameters of the wind machine’s output in operation: a single rotation; a short series of five rotations to create a sustained sound; and a longer series of ten rotations that start at speed and diminish in energy. These gestures are performed, and the resulting acoustic and digital sounds are recorded simultaneously, facilitating an analysis in Matlab of the temporal and spectral domains of the same real-time performance from both sources. The results are reported, and a discussion of how they inform further calibration of the real-time synthesis system is presented.

    Paper sound synthesis adapted to the motion and geometry of the surface

    No full text
    In this article, we present a method to generate plausible sounds in real time for an animation of crumpling virtual paper. We analyse the geometric animation of the deforming surface to detect sound-producing events, then geometrically compute the regions of the paper that resonate due to the propagation of vibrations through the surface. The resulting sound is synthesized from both pre-recorded excerpts and procedural generation, taking into account the geometric shape of the surface and its dynamics. We validate our results by comparing the sound generated by our virtual model against real recordings for a set of characteristic animation cases.

    Visually Indicated Sounds

    Get PDF
    Objects make distinctive sounds when they are hit or scratched. These sounds reveal aspects of an object's material properties, as well as the actions that produced them. In this paper, we propose the task of predicting what sound an object makes when struck as a way of studying physical interactions within a visual scene. We present an algorithm that synthesizes sound from silent videos of people hitting and scratching objects with a drumstick. This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a "real or fake" psychophysical experiment, and that they convey significant information about material properties and physical interactions.
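    The two-stage pipeline this abstract describes (predict sound features, then retrieve a waveform by example) can be caricatured in a few lines. This is a toy sketch only: a linear map stands in for the paper's recurrent network, and all features, dimensions, and exemplars are random placeholders, not the authors' data or model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    VIS_DIM, SND_DIM = 16, 4
    W = rng.standard_normal((SND_DIM, VIS_DIM)) * 0.1  # stand-in for the RNN

    # Exemplar bank: (sound-feature vector, waveform) pairs from "training" clips.
    exemplars = [(rng.standard_normal(SND_DIM), rng.standard_normal(512))
                 for _ in range(8)]

    def predict_sound_features(visual_features):
        # A real system would run a recurrent network over the frame sequence.
        return W @ visual_features

    def retrieve_waveform(sound_features):
        # Example-based synthesis: return the waveform whose features are closest.
        dists = [np.linalg.norm(f - sound_features) for f, _ in exemplars]
        return exemplars[int(np.argmin(dists))][1]

    waveform = retrieve_waveform(predict_sound_features(rng.standard_normal(VIS_DIM)))
    assert waveform.shape == (512,)
    ```

    The retrieved waveform is always one of the exemplars, which is what makes the synthesis "example-based" rather than generative.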