
    The IEM-cube

    Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003.

    Traditional multichannel reproduction systems are mainly used to recreate pantophonic sound fields. Fully periphonic reproduction has been limited both by the computational power needed to manipulate large numbers of audio channels and by the required loudspeaker layouts. Since digital hardware has recently become fast enough to meet the computational requirements, a medium-sized concert hall for the reproduction of periphonic electro-acoustic music, the so-called IEM-Cube, has been installed at the IEM. The room is equipped with a hemisphere of 24 loudspeakers that allows reproduction of three-dimensional sound fields following Ambisonic principles of at least 3rd order. To make use of this, a linear PC-based 3D mixing system has been developed. The system may be used as a production tool for periphonic mixing into a set of Ambisonic channels, as a reproduction environment for recreating a 3D sound field from such a set of Ambisonic-encoded channels, and as a live instrument that allows free positioning and movement of a number of virtual sources in real time.
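The Ambisonic panning underlying such a system can be illustrated with a minimal first-order sketch (the IEM-Cube itself works at 3rd order and above, which requires 16+ spherical-harmonic channels; the function name and B-format convention below are illustrative):

```python
import numpy as np

def encode_fo_ambisonics(signal, azimuth, elevation):
    """Encode a mono signal into first-order B-format (W, X, Y, Z).

    Gains follow the classic B-format convention; angles in radians.
    Higher-order encoding replaces these four gains with real spherical
    harmonics evaluated at the source direction.
    """
    w = signal * (1.0 / np.sqrt(2.0))                  # omnidirectional component
    x = signal * np.cos(azimuth) * np.cos(elevation)   # front/back figure-of-eight
    y = signal * np.sin(azimuth) * np.cos(elevation)   # left/right figure-of-eight
    z = signal * np.sin(elevation)                     # up/down figure-of-eight
    return np.stack([w, x, y, z])

# A source straight ahead on the horizon: X carries the full signal, Y and Z are zero.
sig = np.ones(4)
b = encode_fo_ambisonics(sig, azimuth=0.0, elevation=0.0)
```

Moving a virtual source then only means re-evaluating these direction-dependent gains per block, which is what makes real-time positioning cheap.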

    Sonic Interaction Design: New Applications and Challenges for Interactive Sonification

    Hermann T. Sonic Interaction Design: New Applications and Challenges for Interactive Sonification. In: Sontacchi A, Pomberger H, Zotter F, eds. Proceedings of the 13th International Conference on Digital Audio Effects (DAFx-10). Graz, Austria: IEM; 2010: 1-2.

    Sonic Interaction Design (SID) is the exploitation of sound as a principal channel to convey information and meaning as well as aesthetic and emotional qualities in interactive contexts [1]. SID is a young research field that offers novel perspectives for interactive artefacts and multimodal user interfaces that place sound at the core of their designs, as a means to interact with the user or to communicate and express specific facets. The COST Action IC0601 SID investigates the various aspects of sonic interaction design, focusing on (a) perception, cognition and emotion, (b) product design, (c) interactive art, and (d) sonification and information display. This talk provides an overview of SID and presents examples and design procedures that take sound, its synthesis and generation, and our ways of communicating about sound seriously. Sonification is the data-dependent, reproducible generation of sound using a systematic transformation, and it is a central component in shaping the functional aspects of interactive artefacts [2].
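The sonification definition above, a data-dependent, reproducible, systematic transformation into sound, can be illustrated with a minimal parameter-mapping sketch (the function and the linear value-to-frequency mapping are illustrative, not taken from the talk):

```python
import numpy as np

def sonify(data, fmin=220.0, fmax=880.0, dur=0.2, sr=44100):
    """Parameter-mapping sonification: each data point becomes a short sine
    tone whose frequency is a linear map of the value into [fmin, fmax].
    The mapping is deterministic, so the rendering is reproducible."""
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(), data.max()
    norm = (data - lo) / (hi - lo) if hi > lo else np.zeros_like(data)
    freqs = fmin + norm * (fmax - fmin)
    t = np.arange(int(dur * sr)) / sr
    tones = [np.sin(2 * np.pi * f * t) for f in freqs]
    return np.concatenate(tones)

# Four data points become four 0.2 s tones; larger values sound higher.
audio = sonify([1, 5, 3, 9])
```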

    Horizontal and Vertical Voice Directivity Characteristics of Sung Vowels in Classical Singing

    Singing voice directivity for five sustained German vowels /a:/, /e:/, /i:/, /o:/, /u:/ over a wide pitch range was investigated using a multichannel microphone array with high spatial resolution along the horizontal and vertical axes. A newly created dataset allows voice directivity in classical singing to be examined with high resolution in angle and frequency. Three voice production modes (phonation modes), modal, breathy, and pressed, which could affect the mouth opening used and thus the voice directivity, were investigated. We present detailed results for singing voice directivity and introduce metrics that summarize the differences between complex voice directivity patterns across the whole dataset in a more compact form. Differences were found between vowels, pitches, and genders (voice types with corresponding vocal ranges). Differences between the vowels /a:, e:, i:/ and /o:, u:/ and between pitches can be captured by simplified metrics up to about d2/D5/587 Hz, but we found that voice directivity generally depends strongly on pitch. Minor differences were found between voice production modes; these were more pronounced for female singers. At low pitch, voice directivity differs between vowels, with front vowels being the most directional; which of the front vowels is most directional depends on the evaluated pitch. This appears to be related to the complex radiation pattern of the human voice, which involves large inter-subject variability strongly influenced by the shape of the torso, head, and mouth. All recorded classically sung vowels at high pitches exhibit similarly high directionality.
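The abstract does not state the authors' compact metrics; as a generic illustration of how an angular radiation pattern can be condensed into a single number, a simple directivity index over a circular microphone array might look like this (an illustrative sketch, not the paper's metric):

```python
import numpy as np

def directivity_index(pressures_db, frontal_idx=0):
    """Generic directivity index over a circular microphone array:
    the ratio (in dB) of frontal energy to the mean energy over all
    measured angles. `pressures_db` are sound-pressure levels per mic."""
    p = 10 ** (np.asarray(pressures_db, dtype=float) / 20.0)  # dB -> linear amplitude
    return 10.0 * np.log10(p[frontal_idx] ** 2 / np.mean(p ** 2))

# Identical levels at all angles (omnidirectional source) give 0 dB.
di_omni = directivity_index([80.0] * 8)
```

A more directional vowel, radiating more energy toward the frontal microphone than the array average, yields a positive index.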

    Audio Pitch Shifting Using the Constant-Q Transform

    Pitch shifting of polyphonic music is usually performed by manipulating a time-frequency representation of the input signal. Most approaches proposed in the past are based on the Fourier transform, although its linear frequency bin spacing is known to be somewhat inadequate for analyzing and processing music signals. Recently, invertible constant-Q transforms (CQTs) featuring high Q-factors have been proposed that exhibit a more suitable, geometric bin spacing. In this paper, a frequency-domain pitch shifting approach based on the CQT is proposed. The CQT is particularly attractive for pitch shifting because the shift can be implemented by frequency translation (moving partials along the frequency axis), as opposed to spectral stretching in the Fourier transform domain. Furthermore, the high time resolution of the CQT at high frequencies improves transient preservation. Audio examples are provided to illustrate the results achieved with the proposed method.
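The frequency-translation idea follows directly from the geometric bin spacing: with B bins per octave, a shift of s semitones corresponds to a translation by round(B·s/12) bins, the same integer offset at every frequency. A minimal sketch of that translation step (illustrative only; a complete implementation as in the paper also needs phase handling and an invertible CQT for resynthesis):

```python
import numpy as np

def shift_cqt_bins(cqt, semitones, bins_per_octave=48):
    """Pitch shift in a constant-Q representation by translating partials
    along the geometrically spaced frequency-bin axis (bins x frames).
    Bins shifted in from outside the analyzed range are zero-filled."""
    k = int(round(bins_per_octave * semitones / 12.0))
    shifted = np.zeros_like(cqt)
    if k >= 0:
        shifted[k:] = cqt[:cqt.shape[0] - k]
    else:
        shifted[:k] = cqt[-k:]
    return shifted

# A single partial at bin 10 moves up one octave (48 bins) for +12 semitones.
cqt = np.zeros((200, 5))
cqt[10, :] = 1.0
up = shift_cqt_bins(cqt, 12, bins_per_octave=48)
```

In the Fourier domain the same octave shift would instead require stretching the spectrum by a factor of two, moving every partial by a different number of bins.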