Investigating sound intensity gradients as feedback for embodied learning
This paper explores an intensity-based approach to sound feedback in systems for embodied learning. We describe a theoretical framework, design guidelines, and the implementation of and results from an informant workshop. The specific context of embodied activity is considered in light of the challenges of designing meaningful sound feedback, and a design approach is shown to be a generative way of uncovering significant sound design patterns. The exploratory workshop offers preliminary directions and design guidelines for using intensity-based ambient sound display in interactive learning environments. The value of this research is in its contribution towards the development of a cohesive and ecologically valid model for using audio feedback in systems, which can guide embodied interaction. The approach presented here suggests ways that multi-modal auditory feedback can support interactive collaborative learning and problem solving
Sketching sonic interactions by imitation-driven sound synthesis
Sketching is at the core of every design activity. In visual design, pencil and paper are the preferred tools to produce sketches for their simplicity and immediacy. Analogue tools for sonic sketching do not exist yet, although voice and gesture are embodied abilities commonly exploited to communicate sound concepts. The EU project SkAT-VG aims to support vocal sketching with computer-aided technologies that can be easily accessed, understood and controlled through vocal and gestural imitations. This imitation-driven sound synthesis approach is meant to overcome the ephemerality and timbral limitations of human voice and gesture, allowing designers to produce more refined sonic sketches and to think about sound in a more designerly way. This paper presents two main outcomes of the project: the Sound Design Toolkit, a palette of basic sound synthesis models grounded in ecological perception and physical description of sound-producing phenomena, and SkAT-Studio, a visual framework based on sound design workflows organized in stages of input, analysis, mapping, synthesis, and output. The integration of these two software packages provides an environment in which sound designers can go from concepts, through exploration and mocking-up, to prototyping in sonic interaction design, taking advantage of all the possibilities offered by vocal and gestural imitations in every step of the process
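The staged workflow the abstract attributes to SkAT-Studio (input, analysis, mapping, synthesis, output) can be illustrated with a minimal sketch. All function bodies here are illustrative stand-ins, not SkAT-Studio's actual processing:

```python
# Hypothetical sketch of a staged sound-design workflow (input -> analysis ->
# mapping -> synthesis -> output). Every function body is a toy stand-in.

def analysis(signal):
    # Extract a crude "loudness" feature: mean absolute amplitude.
    return sum(abs(s) for s in signal) / len(signal)

def mapping(feature):
    # Map the feature onto a synthesis parameter, e.g. a gain in [0, 1].
    return min(1.0, feature * 2.0)

def synthesis(gain, length=4):
    # Produce a trivial output "sound": a constant-amplitude frame.
    return [gain] * length

def run_pipeline(input_signal):
    # Chain the stages in the order named by the abstract.
    feature = analysis(input_signal)
    param = mapping(feature)
    return synthesis(param)

print(run_pipeline([0.5, -0.5, 0.5, -0.5]))  # → [1.0, 1.0, 1.0, 1.0]
```

The point of such a stage decomposition is that any stage (say, the mapping from a vocal imitation's features to synthesis parameters) can be swapped out without touching the others.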
Embodied gestures
This is a book about musical gestures: multiple ways to design instruments, compose musical performances, analyze sound objects and represent sonic ideas through the central notion of ‘gesture’.
The writers share knowledge on major research projects, musical compositions and methodological tools developed among different disciplines, such as sound art, embodied music cognition, human-computer interaction, performative studies and artificial intelligence. They show how similar and compatible the notions of embodied music cognition are to the artistic discourses proposed by musicians working with ‘gesture’ as their compositional material.
The authors and editors hope to contribute to the ongoing discussion around creative technologies and music, expressive musical interface design, and the debate around the use of AI technology in music practice, as well as to present a new way of thinking about musical instruments and about composing and performing with them
Building the Knowledge of Human Perception into E-Learning
Human perceptual systems—for sight, sound, taste, touch, and smell (and maybe even embodied proprioception)—may offer some guidelines for how to build multimedia e-learning: immersive 3D simulations, imagery for analysis, sight-and-sound distributions of information channels, and other applications. This work offers a brief overview of human perception (with a little human cognition thrown in) and some light applications to the design of e-learning
Enactive Sound Machines: Theatrical Strategies for Sonic Interaction Design
Embodied interaction with digital sound has been subject to much prior research, but a method of coupling simple and intuitive hand actions to the vast potential of digital soundmaking in a perceptually meaningful way remains elusive. At the same time, artistic practices centred on performative soundmaking with objects remain overlooked by researchers. This thesis explores the design and performance of theatre sound effects in Europe and the U.S. in the late nineteenth and early twentieth century in order to bring the embodied knowledge of soundmaking at the heart of this historical practice together with present-day design and evaluation strategies from Sonic Interaction Design and Digital Musical Instrument design.
An acoustic theatre wind machine is remade and explored as an interactive sounding object facilitating a continuous sonic interaction with a wind-like sound. Its main soundmaking components are digitally modelled in Max/MSP. A prototype digital wind machine is created by fitting the acoustic wind machine with a rotary encoder to activate the digital wind-like sound in performance. Both wind machines are then evaluated in an experiment with participants. The results show that the timbral qualities of the wind-like sounds are the most important factor in how they are rated for similarity, that the rotational speed of both wind machines is not clearly perceivable from their sounds, and that the enactive properties of the acoustic wind machine have not yet been fully captured in the digital prototype. The wind machine’s flywheel mechanism is also found to be influential in guiding participants in their performances. The findings confirm the acoustic wind machine’s ability to facilitate enactive learning, and a more complete picture of its soundmaking components emerges. The work presented in this thesis opens up the potential of mechanisms to couple simple hand actions to complex soundmaking, whether acoustic or digital, in an intuitive way
Artificial intelligence-based approach to modelling of pipe organs
The aim of the project was to develop a new Artificial Intelligence-based method to aid the modelling of musical instruments and sound design. Despite significant advances in music technology, sound design and synthesis of complex musical instruments remain time-consuming and error-prone, requiring expert understanding of the instrument's attributes and significant expertise to produce high-quality synthesised sounds that meet the needs of musicians and musical instrument builders. Artificial Intelligence (AI) offers an effective means of capturing this expertise and of handling the imprecision and uncertainty inherent in audio knowledge and data.
This thesis presents new techniques to capture and exploit audio expertise, developed through extended knowledge elicitation with two renowned music technologist/audio experts and embodied in an intelligent audio system. The AI, combined with perceptual auditory modelling techniques (ITU-R BS.1387), forms a generic modelling framework that provides a robust methodology for sound synthesis parameter optimisation with objective prediction of sound synthesis quality. The evaluation, carried out using typical pipe organ sounds, has shown that the intelligent audio system can automatically design sounds judged by the experts to be of very good quality, while significantly reducing the experts' workload (by up to a factor of three) and the need for extensive subjective tests.
This research work, the first initiative to explicitly capture knowledge from audio experts for sound design, represents an important contribution to the future design of electronic musical instruments based on perceptual sound quality. It will help to develop a new sound quality index for benchmarking sound synthesis techniques and serve as a research framework for the modelling of a wide range of musical instruments.
Musicom Lt
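The core idea the abstract describes — choosing synthesis parameters by optimising an objective quality measure rather than by repeated subjective listening — can be sketched in miniature. This is an assumption-laden toy, not the thesis's method: the "synthesizer" is a one-parameter ramp and the "quality metric" is mean squared error standing in for a perceptual model such as ITU-R BS.1387:

```python
# Toy sketch of objective-quality-guided parameter optimisation.
# All names are illustrative; MSE stands in for a perceptual metric.

def synthesize(freq_param, n=8):
    # Trivial "synthesizer": a ramp scaled by a single parameter.
    return [freq_param * i for i in range(n)]

def quality_distance(candidate, target):
    # Stand-in for a perceptual quality measure: mean squared error.
    return sum((c - t) ** 2 for c, t in zip(candidate, target)) / len(target)

def optimise(target, param_grid):
    # Pick the parameter whose synthesised output best matches the target.
    return min(param_grid, key=lambda p: quality_distance(synthesize(p), target))

target = synthesize(0.5)  # pretend this is a recorded pipe organ sound
best = optimise(target, [0.1, 0.3, 0.5, 0.7])
print(best)  # → 0.5
```

In the thesis the search space is far larger and the objective is a perceptual model validated against expert judgements, but the loop structure — synthesise, score, select — is the same.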
Audio-based narratives for the trenches of World War I: intertwining stories, places and interaction for an evocative experience
We report in detail the co-design, setup and evaluation of a technological intervention for a complex outdoor heritage site: a World War I fortified camp and trenches located in the natural setting of the Italian Alps. Sound was used as the only means of content delivery as it was considered particularly effective in engaging visitors at an emotional level and had the potential to enhance the physical experience of being at a historical place. The implemented prototype is a visitor-aware, personalised, multi-point auditory narrative system that automatically plays sounds and stories depending on a combination of features such as physical location, visitor proximity and visitor preferences. The curators created multiple narratives for the trail to capture the different voices of the War. The stories are all personal accounts (as opposed to objective and detached reporting of the facts); they were designed to trigger empathy and understanding while leaving the visitors free to interpret the content and the place on the basis of their own understanding and sensitivity. The result is an evocative embodied experience that does not describe the place in a traditional sense, but leaves its interpretation open. It takes visitors beyond the traditional view of heritage as a source of information toward a sensorial experience of feeling the past. A prototype was set up and tested with a group of volunteers, showing that a design that carefully combines content design, sound design, and tangible and embodied interaction can bring archaeological remains, with very little to see, back to life
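The visitor-aware triggering described above — playing a story when location, proximity and preference line up — can be sketched as a simple selection rule. The function names, the distance threshold and the example stories are all hypothetical, chosen only to illustrate the combination of features the abstract names:

```python
# Hypothetical sketch of visitor-aware audio triggering: a story plays when
# the visitor is within range of a point of interest and the point's theme
# matches the visitor's stated preference. Names and thresholds are invented.
import math

def distance(a, b):
    # Euclidean distance between two 2D positions (e.g. metres on the trail).
    return math.hypot(a[0] - b[0], a[1] - b[1])

def select_story(visitor_pos, preference, points, radius=10.0):
    # points: list of (position, theme, story) triples along the trail.
    for pos, theme, story in points:
        if distance(visitor_pos, pos) <= radius and theme == preference:
            return story
    return None  # silence: visitor out of range, or no theme match

points = [((0, 0), "soldiers", "A letter home from the front line"),
          ((50, 50), "daily life", "Cooking in the trenches")]

print(select_story((3, 4), "soldiers", points))
```

A real deployment would add positioning error handling and playback state (so a story is not restarted on every position update), but the core is this match between where the visitor is and what they chose to hear.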