
    Perception of the object attributes for sound synthesis purposes

    This paper presents work in progress on the perception of the attributes of the shape of a resonant object. As part of the ecological approach to perception, which assumes that a sound contains specific morphologies conveying perceptually relevant information responsible for its recognition, called invariants, the PRISM laboratory has developed an environmental sound synthesizer aiming to provide perceptual and intuitive controls for a non-expert user. Following a brief presentation of the different strategies for controlling the perceptual attributes of the object, we present an experiment conducted with calibrated sounds generated by a physically-informed synthesis model. This test focuses on the perception of the shape of the object, more particularly its width and thickness, since these attributes, especially the thickness, have not been much studied in the literature from a perceptual point of view. The first results show that the perception of width is difficult for listeners, while the perception of thickness is much easier. This study allows us to validate the proposed control strategy. Further work is planned to better characterize the perceptual invariants relevant for shape perception.

    Intuitive Control of Scraping and Rubbing Through Audio-tactile Synthesis

    Intuitive control of synthesis processes is an ongoing challenge within the domain of auditory perception and cognition. Previous works on sound modelling combined with psychophysical tests have enabled our team to develop a synthesizer that provides intuitive control of actions and objects based on semantic descriptions of sound sources. In this demo we present an augmented version of the synthesizer in which we added tactile stimulations to increase the sensation of true continuous friction interactions (rubbing and scratching) with the simulated objects. This is of interest for several reasons. Firstly, it enables us to evaluate the realism of our sound model in the presence of stimulations from other modalities. Secondly, it enables us to compare tactile and auditory signal structures linked to the same evocation, and thirdly, it provides a tool to investigate multimodal perception and how stimulations from different modalities should be combined to provide realistic user interfaces.

    A synthesis model with intuitive control capabilities for rolling sounds

    This paper presents a physically inspired source-filter model for rolling sound synthesis. The model, which is suitable for real-time implementation, is based on qualitative and quantitative observations obtained from a physics-based model described in the literature. In the first part of the paper, the physics-based model is presented, followed by a perceptual experiment whose aim is to identify the perceptually relevant information characterizing the rolling interaction. On the basis of this experiment, we hypothesize that the particular pattern of the interaction force is responsible for the perception of a rolling object. A complete analysis-synthesis scheme of this interaction force is then provided, along with a description of the calibration of the proposed source-filter sound synthesis process. Finally, a mapping strategy for intuitive control of the proposed synthesis process (i.e. size and velocity of the rolling object and roughness of the surface) is proposed and validated by a listening test.
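
    A minimal, illustrative sketch of the source-filter idea described above (not the authors' calibrated model): a toy source of micro-impacts whose density, energy and timing jitter are steered by hypothetical size, velocity and roughness controls, filtered by a single resonant mode standing in for the object.

        import numpy as np

        SR = 44100  # sample rate (Hz)

        def rolling_source(duration, size, velocity, roughness, sr=SR):
            # Toy source: micro-impacts whose rate grows with velocity, whose
            # amplitude grows with size, and whose timing jitter grows with
            # surface roughness (illustrative only).
            n = int(duration * sr)
            src = np.zeros(n)
            rate = 20.0 + 200.0 * velocity  # impacts per second
            t = 0.0
            while t < duration:
                src[int(t * sr)] += size * (0.5 + np.random.rand())
                t += max(1e-4, (1.0 / rate) * (1.0 + 0.3 * roughness * np.random.randn()))
            return src

        def resonant_mode(x, freq, decay, sr=SR):
            # Two-pole resonator standing in for the filter part of the model.
            r = np.exp(-decay / sr)
            w = 2.0 * np.pi * freq / sr
            a1, a2 = -2.0 * r * np.cos(w), r * r
            y = np.zeros(len(x) + 2)
            for i, xi in enumerate(x):
                y[i + 2] = xi - a1 * y[i + 1] - a2 * y[i]
            return y[2:]

        # small object rolling at moderate speed on a fairly rough surface
        sound = resonant_mode(rolling_source(2.0, size=0.4, velocity=0.6, roughness=0.7),
                              freq=800.0, decay=30.0)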

    Singing voice resynthesis using concatenative-based techniques

    Doctoral thesis. Informatics Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Timbre from Sound Synthesis and High-level Control Perspectives

    Exploring the many surprising facets of timbre through sound manipulations has been a common practice among composers and instrument makers of all times. The digital era radically changed the approach to sounds thanks to the unlimited possibilities offered by computers, which made it possible to investigate sounds without physical constraints. In this chapter we describe investigations on timbre based on the analysis-by-synthesis approach, which consists of using digital synthesis algorithms to reproduce sounds and then modifying the parameters of the algorithms to investigate their perceptual relevance. In the first part of the chapter, timbre is investigated in a musical context. An examination of the sound quality of different wood species for xylophone making is first presented. Then the influence of instrumental control on timbre is described in the case of clarinet and cello performances. In the second part of the chapter, we mainly focus on the identification of sound morphologies, the so-called invariant sound structures responsible for the evocations induced by environmental sounds, by relating basic signal and timbre descriptors to evocations in the case of car door noises, motor noises, solid objects, and their interactions.

    PHYSMISM: a control interface for creative exploration of physical models

    In this paper we describe the design and implementation of the PHYSMISM: an interface for exploring the possibilities for improving the creative use of physical modelling sound synthesis. The PHYSMISM is implemented in a software and a hardware version. Moreover, four different physical modelling techniques are implemented, to explore the implications of using and combining different techniques. In order to evaluate the creative use of physical models, a test was performed using 11 experienced musicians as test subjects. Results show that the capability of combining the physical models and the use of a physical interface engaged the musicians in creative exploration of physical models.

    Beyond key velocity: Continuous sensing for expressive control on the Hammond Organ and Digital keyboards

    In this thesis we seek to explore the potential for continuous key position to be used as an expressive control in keyboard musical instruments, and how pre-existing skills can be adapted to leverage this additional control. Interaction between performer and sound generation on a keyboard instrument is often restricted to a number of discrete events on the keys themselves (note onsets and offsets), while complementary continuous control is provided via additional interfaces, such as pedals, modulation wheels and knobs. The rich vocabulary of gestures that skilled performers can achieve on the keyboard is therefore often simplified to a single, discrete velocity measurement. A limited number of acoustical and electromechanical keyboard instruments do, however, present affordances of continuous key control, so that the role of the key is not limited to delivering discrete events; its instantaneous position is, to a certain extent, an element of expressive control. Recent advances in sensing technologies make it possible to leverage continuous key position as an expressive element in the sound generation of digital keyboard musical instruments. We start by exploring the expression available on the keys of the Hammond organ, where nine contacts are closed at different points of the key throw for each key onset, and we find that the velocity and the percussiveness of the touch affect the way the contacts close and bounce, producing audible differences in the onset transient of each note. We develop an embedded hardware and software environment for low-latency sound generation controlled by continuous key position, which we use to create two digital keyboard instruments. The first of these emulates the sound of a Hammond and can be controlled with continuous key position, allowing arbitrary mappings between the key position and the nine virtual contacts of the digital sound generator. A study with 10 musicians shows that, when exploring the instrument on their own, the players can appreciate the differences between settings and tend to develop a personal preference for one of them. In the second instrument, continuous key position is the fundamental means of expression: percussiveness, key position and multi-key gestures control the parameters of a physical model of a flute. In a study with 6 professional musicians playing this instrument, we gather insights into the adaptation process, the limitations of the interface and the transferability of traditional keyboard playing techniques.
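
    As a rough illustration of the mapping idea above, the sketch below uses made-up closure depths for the nine virtual contacts: a continuous key position between 0 (key at rest) and 1 (fully depressed) determines which contacts are closed, so the onset transient depends on how the key travels through those thresholds. The thesis' actual thresholds, debouncing and sound generation are not reproduced here.

        # Hypothetical closure depths (fraction of key travel) for the nine
        # virtual contacts of a Hammond-style digital sound generator.
        CONTACT_THRESHOLDS = [0.15, 0.22, 0.30, 0.38, 0.46, 0.55, 0.64, 0.74, 0.85]

        def contact_states(key_position):
            # True where a virtual contact is closed at this key position.
            return [key_position >= th for th in CONTACT_THRESHOLDS]

        # a key pressed halfway down closes the first five contacts
        print(contact_states(0.5))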

    My Physical Approach to Musique Concrète Composition: Portfolio of Studio Works

    My recent practice-based research explores the creative potential of the physical manipulation of sound in the composition of sound-based electronic music. Focusing on the poietic aspect of my music making, this commentary discusses the composition process of three musical works: Comme si la foudre pouvait durer, Igaluk - To Scare the Moon with its own Shadow, and desert. It also examines the development of a software instrument, fXfD, along with its resulting musical production. Finally, it discusses the recent musical production of an improvisation duet in which I take part, Tout Croche. In the creative process of this portfolio, the appreciation for sound is the catalyst of the musical decisions. In other words, the term "musique concrète" applies to my practice, as sound is the central concern that triggers the act of composition. In addition to anecdotal, typo-morphological and functional concerns, the presence of a "trace of physicality" in a sound is, more than ever, what convinces me of its musical potential. In order to compose such sounds, a back-and-forth process between theoretical knowledge and sound manipulation will be defined and developed under the concept of "sonic empiricism." In a desire to break with the cumbersome nature of studio-based composition work, approaches to playing sound-based electronic music were researched. Through the different musical projects, various digital instruments were conceived. In a case study, the text reviews them through their sound generation, gestural control and mapping components. I will also state personal preferences in the ways sound manipulations are performed. In light of the observations made, the studio emerges as the central instrument upon which my research focuses. The variety of resources it provides for the production and control of sound confers on the studio the status of a polymorphic instrument. The text concludes by reflecting on the possibilities of improvisation and performance that the studio offers when it is considered as an embodied polymorphic instrument. A concluding statement on the specific ear training needed for such a studio practice bridges the concepts of sound selection and digital instruments presented here.

    Sonifying drawings: characterization of perceptual attributes of sounds produced by human gestures

    Friction sounds produced by the pencil of a person drawing on paper are audible and can convey information about the drawer's gestures. This study first focuses on the perceptual significance of the morphology of such sounds, and on the extent to which gestures can be retrieved from sounds. Sounds recorded during drawing sessions were used in association tests where subjects had to univocally associate friction sounds with different shapes. Results showed that subjects were able to associate sounds with the correct shapes and that the auditory characterization of the shape depended on the velocity profile. A sonification strategy was then proposed for human drawings using a friction sound synthesis model. Inspired by the work of Viviani et al. (1982), who demonstrated a 2/3-power-law relation between movement kinematics and shape curvature for visual perception, experiments were carried out with the synthesis model in which subjects were asked to adjust the power-law exponent so that the most realistic sound was obtained. Results revealed an exponent close to 2/3, as previously found in vision, thus highlighting a similar power law in the auditory modality and providing an ecological way to determine velocity profiles from static shapes and to generate sounds coherent with human gestures.
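
    The power law referred to above relates tangential velocity v to curvature C roughly as v = K * C^(-1/3) (equivalently, angular velocity proportional to C^(2/3)). The sketch below, with an arbitrary gain K and a made-up example shape, shows how a velocity profile could be derived from a static curve before driving a friction-sound model; it is not the authors' implementation.

        import numpy as np

        def curvature(x, y):
            # Discrete curvature of a planar curve sampled at points (x, y).
            dx, dy = np.gradient(x), np.gradient(y)
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

        def velocity_from_shape(x, y, k=1.0, beta=1.0 / 3.0):
            # Two-thirds power law: v = k * C**(-beta), so the virtual pencil
            # slows down in regions of high curvature.
            c = np.maximum(curvature(x, y), 1e-6)  # avoid division by zero
            return k * c ** (-beta)

        # example shape: an ellipse; velocity dips at the sharply curved tips
        t = np.linspace(0.0, 2.0 * np.pi, 1000)
        x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)
        v = velocity_from_shape(x, y)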

    Navigating in a space of synthesized interaction-sounds: rubbing, scratching and rolling sounds

    In this paper, we investigate a control strategy for synthesized interaction sounds. The framework of our research is based on the {action/object} paradigm, which considers that sounds result from an action on an object. This paradigm presumes that there exist sound invariants, i.e. perceptually relevant signal morphologies that carry information about the action or the object. Some of these auditory cues are considered for rubbing, scratching and rolling interactions. A generic sound synthesis model allowing the production of these three types of interaction is detailed, together with a control strategy for this model. The proposed control strategy allows users to navigate continuously in an "action space" and to morph between interactions, e.g. from rubbing to rolling.
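
    A minimal sketch of what continuous navigation in such an action space could look like, using hypothetical presets and parameter names rather than the model's actual, perceptually calibrated controls: each interaction is a point in parameter space, and a position in the space yields an interpolated parameter set.

        # Hypothetical parameter presets for the three interactions.
        PRESETS = {
            "rubbing":    {"impact_rate": 0.2, "sharpness": 0.1, "noise": 0.8},
            "scratching": {"impact_rate": 0.6, "sharpness": 0.9, "noise": 0.6},
            "rolling":    {"impact_rate": 0.9, "sharpness": 0.4, "noise": 0.1},
        }

        def morph(weights):
            # Barycentric interpolation of synthesis parameters over the three
            # interactions, e.g. weights = {"rubbing": 0.5, "rolling": 0.5}.
            total = sum(weights.values())
            params = dict.fromkeys(next(iter(PRESETS.values())), 0.0)
            for name, w in weights.items():
                for key, value in PRESETS[name].items():
                    params[key] += (w / total) * value
            return params

        # halfway between rubbing and rolling
        print(morph({"rubbing": 0.5, "rolling": 0.5}))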