19 research outputs found
The IEM-cube
Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003. Traditional multichannel reproduction systems are mainly used for the recreation of pantophonic sound fields. Fully periphonic reproduction has been limited by the computational power needed to manipulate large numbers of audio channels as well as by the required speaker layouts. Since digital hardware has in recent years become fast enough to meet these computational requirements, a medium-sized concert hall for the reproduction of periphonic electro-acoustic music, the so-called IEM-Cube, has been installed at the IEM. The room is equipped with a hemisphere of 24 loudspeakers that allows reproduction of three-dimensional sound fields following Ambisonic principles of at least 3rd order. To make use of this, a linear, PC-based 3D mixing system has been developed. The system may be used as a production tool for periphonic mixing into a set of Ambisonic channels, as a reproduction environment for recreating a 3D sound field from such a set of Ambisonic-encoded channels, and as a live instrument that allows free positioning and movement of a number of virtual sources in real time.
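The Ambisonic workflow the abstract describes (mixing sources into a set of Ambisonic channels, then decoding them to a speaker hemisphere) can be illustrated with a minimal first-order sketch. The IEM-Cube system uses at least 3rd order with 24 loudspeakers; first order and a basic projection decoder are shown here only for brevity, and the function names are illustrative, not taken from the paper.

```python
import numpy as np

def encode_foa(signal, az, el):
    """Encode a mono signal into first-order Ambisonics (B-format).
    az/el are source azimuth and elevation in radians."""
    w = signal / np.sqrt(2.0)               # omnidirectional component
    x = signal * np.cos(az) * np.cos(el)    # front-back figure-of-eight
    y = signal * np.sin(az) * np.cos(el)    # left-right figure-of-eight
    z = signal * np.sin(el)                 # up-down figure-of-eight
    return np.stack([w, x, y, z])

def decode_foa(bformat, speaker_dirs):
    """Basic projection decode of a B-format mix to loudspeaker feeds.
    speaker_dirs is a list of (az, el) pairs in radians."""
    w, x, y, z = bformat
    feeds = []
    for az, el in speaker_dirs:
        g = (w * np.sqrt(2.0)
             + x * np.cos(az) * np.cos(el)
             + y * np.sin(az) * np.cos(el)
             + z * np.sin(el)) / len(speaker_dirs)
        feeds.append(g)
    return np.stack(feeds)
```

A source encoded at the front then appears mainly in the frontal speaker feed; moving a virtual source is just re-encoding with new angles, independent of the speaker layout.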
Sonic Interaction Design: New Applications and Challenges for Interactive Sonification
Hermann T. Sonic Interaction Design: New Applications and Challenges for Interactive Sonification. In: Alois S, Pomberger H, Zotter F, eds. Proceedings of the 13th International Conference on Digital Audio Effects (DAFx-10). Graz, Austria: IEM; 2010: 1-2. Sonic Interaction Design (SID) is the exploitation of sound as a principal channel to convey information and meaning as well as aesthetic and emotional qualities in interactive contexts [1]. SID is a young research field that offers novel perspectives for interactive artefacts and multimodal user interfaces that place sound at the core of their designs, as a means to interact with the user or to communicate and express specific facets. The COST Action IC0601 SID investigates the various aspects of sonic interaction design with a focus on (a) perception, cognition and emotion, (b) product design, (c) interactive art, and (d) sonification and information display. This talk provides an overview of SID and presents examples and design procedures that take sound, its synthesis and generation, as well as our modes of communicating about sound, seriously. Sonification is the data-dependent, reproducible generation of sound using a systematic transformation, and it is a central component in shaping the functional aspect of interactive artefacts [2].
A 3D real time rendering engine for binaural sound reproduction
Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003. A computationally efficient method of 3D sound reproduction via headphones is presented, using a virtual Ambisonic approach. Previous studies have shown that incorporating head tracking as well as room simulation is important for improving sound source localization. The simulation of virtual acoustic space requires filtering the stimuli with head-related transfer functions (HRTFs). In time-varying systems this raises the problem of high-quality interpolation between different HRTFs. In the proposed model, encoding signals into the Ambisonic domain results in time-invariant HRTF filters. The proposed system is implemented on a standard notebook using Pure Data (PD), a graphical, open-source, real-time computer-music environment.
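The key idea, compensating head rotation in the Ambisonic domain so that the HRTF filters on the fixed virtual loudspeakers never change, can be sketched at first order as a simple yaw rotation of the sound field. This is an illustrative reduction of the approach, not the paper's implementation; only rotation about the vertical axis is shown.

```python
import numpy as np

def rotate_foa_yaw(bformat, yaw):
    """Rotate a first-order Ambisonic (B-format) field about the
    vertical axis. Applying the tracked head yaw here, before a
    fixed virtual-speaker decode, leaves the per-speaker HRTF
    filters time-invariant, so no HRTF interpolation is needed."""
    w, x, y, z = bformat
    c, s = np.cos(yaw), np.sin(yaw)
    xr = c * x - s * y   # W and Z are rotation-invariant about the
    yr = s * x + c * y   # vertical axis; only X/Y mix
    return np.stack([w, xr, yr, z])
```

After rotation, each channel is decoded to a static set of virtual loudspeakers and convolved with that speaker's fixed HRTF pair; only this cheap channel mixing varies with head movement.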
Report on the In-vehicle Auditory Interactions Workshop: Taxonomy, Challenges, and Approaches
Jeon M, Hermann T, Bazilinskyy P, et al. Report on the In-vehicle Auditory Interactions Workshop: Taxonomy, Challenges, and Approaches. In: Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications - Automotive'UI 15. 2015: 1-5. As driving is mainly a visual task, auditory displays play a critical role in in-vehicle interactions. To advance in-vehicle auditory interactions, auditory display researchers and automotive user interface researchers came together to discuss this timely topic at an in-vehicle auditory interactions workshop at the International Conference on Auditory Display (ICAD). The present paper reports the discussion outcomes from the workshop to stimulate further discussion at the AutoUI conference.
Horizontal and Vertical Voice Directivity Characteristics of Sung Vowels in Classical Singing
Singing voice directivity for five sustained German vowels /a:/, /e:/, /i:/, /o:/, /u:/ over a wide pitch range was investigated using a multichannel microphone array with high spatial resolution along the horizontal and vertical axes. A newly created dataset allows voice directivity in classical singing to be examined with high resolution in angle and frequency. Three voice production modes (phonation modes), modal, breathy, and pressed, which could affect mouth opening and voice directivity, were investigated. We present detailed results for singing voice directivity and introduce metrics to discuss the differences between complex voice directivity patterns across the whole dataset in a more compact form. Differences were found between vowels, pitch, and gender (voice types with corresponding vocal range). Differences between the vowels /a:, e:, i:/ and /o:, u:/ and across pitch can be captured by simplified metrics up to about d2/D5/587 Hz, but we found that voice directivity generally depends strongly on pitch. Minor differences were found between voice production modes; these were more pronounced for female singers. Voice directivity differs at low pitch between vowels, with front vowels being most directional. We found that which of the front vowels is most directional depends on the evaluated pitch. This seems to be related to the complex radiation pattern of the human voice, which involves large inter-subject variability strongly influenced by the shape of the torso, head, and mouth. All recorded classically sung vowels at high pitches exhibit similarly high directionality.
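The abstract does not specify the compact metrics the authors introduce, but one common stand-in for summarizing a multi-microphone directivity measurement in a single number is a directivity index: on-axis energy relative to the average over all measured directions. The sketch below is generic and only illustrates the kind of reduction such metrics perform.

```python
import numpy as np

def directivity_index_db(levels_db, frontal_idx=0):
    """Generic directivity index from per-microphone levels (dB):
    frontal energy relative to the mean energy over all measured
    directions. A diffuse (omnidirectional) source yields 0 dB;
    a frontally focused source yields a positive value."""
    p2 = 10.0 ** (np.asarray(levels_db, dtype=float) / 10.0)  # dB -> power
    return 10.0 * np.log10(p2[frontal_idx] / p2.mean())
```

Applied per vowel, pitch, and phonation mode, such a scalar lets directivity patterns be compared compactly, at the cost of hiding the angular detail the full dataset provides.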