88 research outputs found

    Interactive procedural simulation of paper tearing with sound

    Get PDF
    International audience. We present a phenomenological model for the real-time simulation of paper tearing and its sound. The model takes as input the rotations of the hand, along with the index finger and thumb of the left and right hands, to drive the position and orientation of two regions of a sheet of paper. The motion of the hands produces a cone-shaped deformation of the paper and guides the formation and growth of the tear. We create a model for the direction of the tear based on empirical observation, and add detail to the tear with a directed-noise model. Furthermore, we present a procedural sound-synthesis method to produce tearing sounds during interaction. We show a variety of paper-tearing examples and discuss applications and limitations.
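The "direction model plus directed noise" idea in this abstract could be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the relaxation term, and all parameter values are assumptions.

```python
import math
import random

def tear_path(start, direction, steps=50, step_len=0.01,
              noise_scale=0.3, seed=0):
    """Generate a 2D tear path: a principal tear direction (the empirical
    model) perturbed by directed noise to add jagged detail.
    `direction` is an angle in radians; each step is randomly deflected,
    then relaxed back toward the principal direction."""
    rng = random.Random(seed)
    x, y = start
    path = [(x, y)]
    angle = direction
    for _ in range(steps):
        angle += noise_scale * rng.uniform(-1.0, 1.0)  # directed noise
        angle += 0.5 * (direction - angle)             # pull back toward the main direction
        x += step_len * math.cos(angle)
        y += step_len * math.sin(angle)
        path.append((x, y))
    return path
```

Because the deflection is bounded and constantly relaxed, the path stays close to the principal direction while still looking jagged at small scale.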

    A Study of Material Sonification in Touchscreen Devices

    Full text link
    Even in the digital age, designers largely rely on physical material samples to illustrate their products, as existing visual representations fail to sufficiently reproduce the look and feel of real-world materials. Here, we investigate the use of interactive material sonification as an additional sensory modality for communicating well-established material qualities such as softness, pleasantness, or value. We developed a custom application for touchscreen devices that receives tactile input and translates it into material rubbing sounds using granular synthesis. We used this system to perform a psychophysical study in which the user's ability to rate subjective material qualities is evaluated, with the actual material samples serving as reference stimuli. Our experimental results indicate that the considered audio cues do not significantly contribute to the perception of material qualities but are able to increase the level of immersion when interacting with digital samples.
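The touch-to-sound step described here could be sketched roughly as below. This is a toy granular-synthesis sketch under stated assumptions: the function name, the speed-to-amplitude mapping, and the use of white-noise grains are illustrative, not the authors' design (their grains come from recorded material samples).

```python
import math
import random

def granular_rub(touch_speeds, sample_rate=16000, grain_ms=30, seed=0):
    """Tiny granular-synthesis sketch: for each touch-speed reading,
    emit one Hann-windowed noise grain whose amplitude scales with
    speed (faster rubbing -> louder grain), and concatenate them."""
    rng = random.Random(seed)
    n = int(sample_rate * grain_ms / 1000)
    out = []
    for speed in touch_speeds:
        amp = min(1.0, speed)  # clamp speed into [0, 1]
        for i in range(n):
            # Hann window avoids clicks at grain boundaries
            w = 0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1))
            out.append(amp * w * rng.uniform(-1.0, 1.0))
    return out
```

A real implementation would overlap grains and draw them from a recorded corpus; the windowing and speed-driven control shown here are the essential structure.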

    Real-time sound synthesis for paper material based on geometric analysis

    Get PDF
    International audience. In this article, we present the first method to generate plausible sounds while animating crumpling virtual paper in real time. Our method handles the shape-dependent friction and crumpling sounds that typically occur when manipulating or creasing paper by hand. Based on a run-time geometric analysis of the deforming surface, we identify resonating regions that characterize the sound being produced. Coupled with a fast analysis of the surrounding elements, the sound can be efficiently spatialized to take into account nearby wall or table reflectors. Finally, the sound is synthesized in real time using a pre-recorded database of frequency- and time-domain sound sources. Our synthesized sounds are evaluated by comparing them to recordings for a specific set of paper deformations.
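The event-detection half of the run-time geometric analysis could be sketched as follows. This is a deliberately simplified 1D stand-in, assuming hinge angles per frame as input; the real method analyses a deforming 2D surface, and the function name and threshold are hypothetical.

```python
def detect_crease_events(angles_per_frame, threshold=0.3):
    """Sketch of geometric event detection: watch the bending angle at
    each hinge of a (here, 1D) paper model and report a crumpling event
    whenever an angle jumps by more than `threshold` radians between
    consecutive frames. Returns (frame_index, hinge_index) pairs."""
    events = []
    prev = angles_per_frame[0]
    for f, frame in enumerate(angles_per_frame[1:], start=1):
        for h, (a, b) in enumerate(zip(prev, frame)):
            if abs(b - a) > threshold:
                events.append((f, h))
        prev = frame
    return events
```

Each detected event would then trigger sound generation, with the resonating region around the hinge shaping the result.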

    Synthèse de son de papier adaptée au mouvement et à la géométrie de la surface

    Get PDF
    National audience. In this article, we present a method to generate plausible sounds in real time for an animation of crumpling virtual paper. We analyse the geometric animation of the paper's surface to detect sound-producing events, then geometrically compute the regions of the paper that vibrate as waves propagate through the surface. The resulting sound is synthesized from both pre-recorded excerpts and procedural generation, taking into account the geometric shape of the surface and its dynamics. We validate our results by comparing the sound generated by our virtual model against real recordings for a set of characteristic animation cases.

    A Multidimensional Sketching Interface for Visual Interaction with Corpus-Based Concatenative Sound Synthesis

    Get PDF
    The present research sought to investigate the correspondence between auditory and visual feature dimensions and to utilise this knowledge to inform the design of audio-visual mappings for visual control of sound synthesis. The first stage of the research involved the design and implementation of Morpheme, a novel interface for interaction with corpus-based concatenative synthesis. Morpheme uses sketching as a model for interaction between the user and the computer. The purpose of the system is to facilitate the expression of sound design ideas by describing the qualities of the sound to be synthesised in visual terms, using a set of perceptually meaningful audio-visual feature associations. The second stage of the research involved the preparation of two multidimensional mappings for the association between auditory and visual dimensions. The third stage involved the evaluation of the Audio-Visual (A/V) mappings and of Morpheme's user interface. The evaluation comprised two controlled experiments, an online study, and a user study. Our findings suggest that the strength of the perceived correspondence between the A/V associations prevails over the timbre characteristics of the sounds used to render the complementary polar features. Hence, the empirical evidence gathered by previous research is generalizable and applicable to different contexts, and the overall dimensionality of the sound used for rendering should not have a significant effect on the comprehensibility and usability of an A/V mapping. However, the findings of the present research also show that there is a non-linear interaction between the harmonicity of the corpus and the perceived correspondence of the audio-visual associations. For example, strongly correlated cross-modal cues such as size-loudness or vertical position-pitch are affected less by the harmonicity of the audio corpus than more weakly correlated dimensions (e.g. texture granularity-sound dissonance).
No significant differences were revealed as a result of musical/audio training. The third study consisted of an evaluation of Morpheme's user interface, where participants were asked to use the system to design a sound for a given video footage. The usability of the system was found to be satisfactory. An interface for drawing visual queries was developed for high-level control of the retrieval and signal processing algorithms of concatenative sound synthesis. This thesis elaborates on previous research findings and proposes two methods for empirically driven validation of audio-visual mappings for sound synthesis. These methods could be applied to a wide range of contexts in order to inform the design of cognitively useful multi-modal interfaces and the representation and rendering of multimodal data. Moreover, this research contributes to the broader understanding of multimodal perception by gathering empirical evidence about the correspondence between auditory and visual feature dimensions and by investigating which factors affect the perceived congruency between aural and visual structures.
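The cross-modal cues named in this abstract (size-loudness, vertical position-pitch) could be turned into a synthesis-control mapping along these lines. This is an illustrative sketch only; the function name, ranges, and log-spaced pitch scale are assumptions, not Morpheme's actual mapping.

```python
def stroke_to_synthesis_params(stroke_size, stroke_y, canvas_height=1.0,
                               min_pitch_hz=110.0, max_pitch_hz=880.0):
    """Map two sketch-stroke attributes to synthesis parameters using
    the cross-modal associations size->loudness and vertical
    position->pitch (higher on the canvas -> higher pitch, log-spaced
    so equal vertical steps sound like equal musical intervals)."""
    loudness = max(0.0, min(1.0, stroke_size))
    t = max(0.0, min(1.0, stroke_y / canvas_height))
    pitch = min_pitch_hz * (max_pitch_hz / min_pitch_hz) ** t
    return {"loudness": loudness, "pitch_hz": pitch}
```

In a corpus-based concatenative system, such parameters would drive unit selection and playback rather than an oscillator directly.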

    Evaluation of Synthesised Sound Effects

    Get PDF
    PhD thesis. The current field of sound synthesis research presents a range of methods and approaches for synthesising a given sound. Sounds are synthesised to facilitate interaction with or control of a sound, to enable sound searching through parametric control of a sound, or to allow for the creation of an artificial, nonexistent sound. In all of these cases, the ability of a synthesis technique to reproduce a desired sound is integral. This thesis uses an audio-feature representation to produce a sonically inspired taxonomy, based entirely on the sonic content of sound, which enables a user to search through a large set of sounds without the need to understand context. This provides an approach for using audio features to compare the similarity between different samples in a sound effects library. This thesis then develops approaches for the evaluation of synthesised sound effects. A large-scale, methodical subjective evaluation of synthesised sound effects is performed, covering a range of synthesis methods across a range of sound classes or sonic contexts. It is then identified that there are cases where synthesised sound effects can be considered as realistic as a recorded sample. An objective evaluation approach is then presented. Audio feature vectors are used to measure the relative objective similarities between two samples, and this is correlated with a perceptual evaluation of sound similarity. These objective measures are then compared based on the perceptual evaluations. Both evaluation approaches are then demonstrated in a case study of aeroacoustic sound effects, where these subjective and objective evaluation techniques are applied to a specific case. There is no single best approach to synthesising sound effects. More consistent and rigorous evaluation methodologies will lead to a better understanding of the advantages and disadvantages of each method.
The outcome of this research suggests that further consistent perceptual and objective evaluation within the sound effect synthesis community will lead to a better understanding of the successes and failings of existing work and thus facilitate an enhancement of current sound synthesis technologies. This work was supported by the EPSRC grant EP/M506394/1.
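The objective-similarity step (feature vectors compared between two samples) could be sketched as below. The band-energy features here are a crude stand-in for the real audio features used in the thesis, and both function names are hypothetical.

```python
import math

def feature_vector(signal, bands=4):
    """Crude audio-feature vector: split the signal into `bands`
    consecutive chunks and take the RMS energy of each chunk (a
    stand-in for proper spectral or perceptual features)."""
    n = len(signal) // bands
    return [math.sqrt(sum(x * x for x in signal[i * n:(i + 1) * n]) / n)
            for i in range(bands)]

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors: 1.0 means the
    vectors point in the same direction (maximally similar)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)
```

Such a similarity score is what would then be correlated against listeners' perceptual similarity ratings.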