
    Sketching sonic interactions by imitation-driven sound synthesis

    Sketching is at the core of every design activity. In visual design, pencil and paper are the preferred sketching tools for their simplicity and immediacy. Analogue tools for sonic sketching do not exist yet, although voice and gesture are embodied abilities commonly exploited to communicate sound concepts. The EU project SkAT-VG aims to support vocal sketching with computer-aided technologies that can be easily accessed, understood and controlled through vocal and gestural imitations. This imitation-driven sound synthesis approach is meant to overcome the ephemerality and timbral limitations of human voice and gesture, allowing designers to produce more refined sonic sketches and to think about sound in a more designerly way. This paper presents two main outcomes of the project: the Sound Design Toolkit, a palette of basic sound synthesis models grounded in ecological perception and the physical description of sound-producing phenomena, and SkAT-Studio, a visual framework based on sound design workflows organized in stages of input, analysis, mapping, synthesis, and output. The integration of these two software packages provides an environment in which sound designers can go from concepts, through exploration and mock-ups, to prototypes in sonic interaction design, taking advantage of all the possibilities offered by vocal and gestural imitations at every step of the process.
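    The staged workflow named in the abstract (input, analysis, mapping, synthesis, output) can be illustrated with a minimal sketch. All function names and the specific features below are hypothetical and for illustration only; they are not the actual SkAT-Studio API.

    ```python
    # Hypothetical sketch of a five-stage sketching pipeline in the spirit of
    # SkAT-Studio: input -> analysis -> mapping -> synthesis -> output.
    import math

    def analysis(signal):
        # Extract a crude loudness feature (RMS) from the raw input signal.
        rms = math.sqrt(sum(x * x for x in signal) / len(signal))
        return {"rms": rms}

    def mapping(features):
        # Map the analysed feature onto a synthesis parameter.
        return {"gain": min(1.0, features["rms"] * 4.0)}

    def synthesis(params, n=8):
        # Produce a trivial output "sound": a gain-scaled sine burst.
        return [params["gain"] * math.sin(2 * math.pi * i / n) for i in range(n)]

    def run_pipeline(signal):
        # Chain the stages in the order named in the abstract.
        return synthesis(mapping(analysis(signal)))

    out = run_pipeline([0.1, -0.1, 0.2, -0.2])
    ```

    In a real system each stage would be a reconfigurable block (e.g. pitch and loudness tracking in analysis, a learned or user-defined mapping layer), but the chaining structure is the same.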

    To “Sketch-a-Scratch”

    A surface can be harsh and raspy, or smooth and silky, and everything in between. We are used to sensing these features with our fingertips as well as with our eyes and ears: the exploration of a surface is a multisensory experience. Tools, too, are often employed in the interaction with surfaces, since they augment our manipulation capabilities. “Sketch-a-Scratch” is a tool for the multisensory exploration and sketching of surface textures. The user’s actions drive a physical sound model of real materials’ response to interactions such as scraping, rubbing or rolling. Moreover, different input signals can be converted into 2D visual surface profiles, enabling users to experience them visually, aurally and haptically.

    The Sound Design Toolkit

    The Sound Design Toolkit is a collection of physically informed sound synthesis models, specifically designed for practice and research in Sonic Interaction Design. The collection is based on a hierarchical, perceptually founded taxonomy of everyday sound events, implemented by procedural audio algorithms which emphasize the role of sound as a process rather than a product. The models are intuitive to control – and the resulting sounds easy to predict – as they rely on basic everyday listening experience. Physical descriptions of sound events are intentionally simplified to emphasize the most perceptually relevant timbral features and to reduce computational requirements.
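    A minimal illustration of what “physically informed” procedural audio can mean: an impact sound rendered as a sum of exponentially decaying resonant modes. The mode frequencies, decay rates and amplitudes below are made up for illustration and are not taken from the actual toolkit.

    ```python
    # Illustrative modal impact synthesis: each mode is a damped sinusoid,
    # a common simplified physical description of a struck resonant object.
    import math

    def impact(modes, sr=8000, dur=0.05):
        # modes: list of (frequency_hz, decay_per_second, amplitude) triples.
        n = int(sr * dur)
        out = []
        for i in range(n):
            t = i / sr
            s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                    for f, d, a in modes)
            out.append(s)
        return out

    # Two hypothetical resonant modes of a struck object.
    samples = impact([(440.0, 60.0, 1.0), (1230.0, 90.0, 0.5)])
    ```

    Because the control parameters (mode frequencies, damping) correspond to physical properties like size and material, the resulting sounds are easy to predict from everyday listening experience, which is the design principle the abstract describes.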

    Visualising Music with Impromptu

    This paper discusses our experiments with a method of creating visual representations of music using a graphical library for Impromptu that emulates and builds on Logo’s turtle graphics. We explore the potential and limitations of this library for visualising music, and demonstrate some ways in which this simple system can assist the musician by revealing musical structure.


    An Acoustic Wind Machine and its Digital Counterpart : Initial Audio Analysis and Comparison

    As part of an investigation into the potential of historical theatre sound effects as a resource for Sonic Interaction Design (SID), an acoustic theatre wind machine was constructed and analysed as an interactive sounding object. Using the Sound Design Toolkit (SDT), a digital, physical modelling-based version of the wind machine was programmed, and the acoustic device was fitted with a sensor system to control the digital model. This paper presents an initial comparison between the sound output of the acoustic theatre wind machine and its digital counterpart. Three simple and distinct rotational gestures are chosen to explore the main acoustic parameters of the output of the wind machine in operation: a single rotation; a short series of five rotations to create a sustained sound; and a longer series of ten rotations that start at speed and diminish in energy. These gestures are performed, and the resulting acoustic and digital sounds are recorded simultaneously, facilitating an analysis, in Matlab, of the temporal and spectral domains of the same real-time performance of both sources. The results are reported, and a discussion of how they inform further calibration of the real-time synthesis system is presented.
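    One spectral measure commonly used in comparisons like the one described is the spectral centroid (the “brightness” of a sound). The sketch below is illustrative only and is not the paper’s Matlab code; it computes the centroid with a naive DFT for clarity.

    ```python
    # Illustrative spectral centroid: the magnitude-weighted mean frequency
    # of a signal's spectrum, computed with a naive (O(n^2)) DFT.
    import math

    def spectral_centroid(signal, sr):
        n = len(signal)
        mags, freqs = [], []
        for k in range(n // 2):  # keep only non-negative frequencies
            re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
            im = -sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
            mags.append(math.hypot(re, im))
            freqs.append(k * sr / n)
        total = sum(mags)
        return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0

    # Sanity check: a pure 100 Hz tone should have its centroid near 100 Hz.
    sr = 800
    tone = [math.sin(2 * math.pi * 100 * i / sr) for i in range(80)]
    centroid = spectral_centroid(tone, sr)
    ```

    Comparing such measures between simultaneously recorded acoustic and synthetic outputs gives a simple, quantitative handle on how closely the digital model tracks the physical device.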

    A design exploration on the effectiveness of vocal imitations

    Among sonic interaction design practices, rising interest is given to the use of the voice as a tool for producing fast and rough sketches. The goal of the EU project SkAT-VG (Sketching Audio Technologies using Vocalization and Gestures, 2014–2016) is to develop vocal sketching as a reference practice for sound design by (i) improving our understanding of how sounds are communicated through vocalizations and gestures, (ii) looking for physical relations between vocal sounds and sound-producing phenomena, and (iii) designing tools for converting vocalizations and gestures into parametrized sound models. We present the preliminary outcomes of a vocal sketching workshop held at the Conservatory of Padova, Italy. Research-through-design activities focused on how teams of potential designers make use of vocal imitations, and on how morphological attributes of sound may inform the training of basic vocal techniques.

    AVUI: Designing a toolkit for audiovisual interfaces

    The combined use of sound and image has a rich history, from audiovisual artworks to research exploring the potential of data visualization and sonification. However, we lack standard tools or guidelines for audiovisual (AV) interaction design, particularly for live performance. We propose the AVUI (AudioVisual User Interface), where sound and image are used together in a cohesive way in the interface, and an enabling technology, the ofxAVUI toolkit. The AVUI guidelines and ofxAVUI were developed in a three-stage process together with AV producers: 1) participatory design activities; 2) prototype development; 3) encapsulation of the prototype as a plug-in, evaluation, and roll-out. Best practices identified include: reconfigurable interfaces and mappings; object-oriented packaging of AV and UI; diverse sound visualization; and flexible media manipulation and management. The toolkit and a mobile app developed with it have been released as open source. The guidelines and toolkit demonstrate the potential of AVUI and offer designers a convenient framework for AV interaction design.
