508 research outputs found

    Technical aspects of a demonstration tape for three-dimensional sound displays

    This document was developed to accompany an audio cassette demonstrating work in three-dimensional auditory displays carried out at the Ames Research Center Aerospace Human Factors Division. It provides a text version of the audio material and covers the theoretical and technical issues of spatial auditory displays in greater depth than the cassette. The technical procedures used in producing the audio demonstration are documented, including the methods for simulating rotorcraft radio communication, synthesizing auditory icons, and using the Convolvotron, a real-time spatialization device.
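
    The Convolvotron performs real-time convolution of a source signal with head-related impulse responses (HRIRs). As a rough, non-authoritative illustration of that basic operation, the Python sketch below spatializes a mono signal offline; the HRIR arrays and the noise signal are hypothetical placeholders, not the actual measured filters.

    # Minimal sketch of binaural spatialization by HRIR convolution.
    # The HRIRs below are random placeholders; real ones are measured per direction.
    import numpy as np
    from scipy.signal import fftconvolve

    def spatialize(mono, hrir_left, hrir_right):
        """Convolve a mono signal with left/right HRIRs to get a binaural pair."""
        left = fftconvolve(mono, hrir_left, mode="full")
        right = fftconvolve(mono, hrir_right, mode="full")
        return np.stack([left, right], axis=-1)

    fs = 44100
    mono = np.random.randn(fs)              # 1 s of white noise as a test source
    hrir_l = np.random.randn(256) * 0.01    # placeholder 256-tap HRIRs
    hrir_r = np.random.randn(256) * 0.01
    binaural = spatialize(mono, hrir_l, hrir_r)   # shape: (samples, 2)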

    Sound Source Separation

    This is the author's accepted pre-print of the article, first published as G. Evangelista, S. Marchand, M. D. Plumbley and E. Vincent, "Sound source separation," in U. Zölzer (ed.), DAFX: Digital Audio Effects, 2nd edition, Chapter 14, pp. 551-588, John Wiley & Sons, March 2011. ISBN 9781119991298. DOI: 10.1002/9781119991298.ch14

    The Effectiveness of Chosen Partial Anthropometric Measurements in Individualizing Head-Related Transfer Functions on Median Plane

    Individualizing head-related impulse responses (HRIRs) to perfectly suit a particular listener remains an open problem in HRIR modeling. We modeled the full magnitude range of head-related transfer functions (HRTFs) in the frequency domain via principal components analysis (PCA), with 37 persons subjected to sound sources on the median plane. We found that a linear combination of only 10 orthonormal basis functions was sufficient to satisfactorily model individual magnitude HRTFs. Our goal was to form multiple linear regressions (MLR) between the weights of the basis functions obtained from PCA and chosen partial anthropometric measurements, in order to individualize a particular listener's HRTFs from his or her own anthropometry. We propose a novel individualization method based on MLR of the basis-function weights that employs only 8 out of 27 anthropometric measurements. The experimental results showed that the proposed method, with a mean error of 11.21%, outperformed our previous work on individualizing minimum-phase HRIRs (mean error 22.50%) and magnitude HRTFs on the horizontal plane (mean error 12.17%), as well as similar studies. The proposed individualization method showed that the individualized magnitude HRTFs closely approximate the original ones with only a slight error. Thus the eight chosen anthropometric measurements proved effective in individualizing magnitude HRTFs, particularly on the median plane.
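
    As a hedged illustration of the general pipeline this abstract describes (PCA over magnitude HRTFs, then multiple linear regression from anthropometric measurements to the PCA weights), the Python sketch below uses random stand-in data; the array sizes merely echo the 37 subjects, 10 basis functions and 8 measurements mentioned above and do not reproduce the paper's method in detail.

    # Stand-in data only: PCA-based HRTF magnitude model + MLR from anthropometry.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_subjects, n_freq_bins, n_measure = 37, 128, 8

    hrtf_mag = rng.normal(size=(n_subjects, n_freq_bins))   # log-magnitude HRTFs per subject
    anthro = rng.normal(size=(n_subjects, n_measure))       # chosen anthropometric measurements

    # Step 1: a small set of orthonormal basis functions captures the magnitude HRTFs
    pca = PCA(n_components=10)
    weights = pca.fit_transform(hrtf_mag)                   # per-subject basis-function weights

    # Step 2: multiple linear regression from anthropometry to the PCA weights
    mlr = LinearRegression().fit(anthro, weights)

    # Individualization for a new listener: predict weights, then reconstruct the HRTF
    new_anthro = rng.normal(size=(1, n_measure))
    pred_weights = mlr.predict(new_anthro)
    pred_hrtf = pca.inverse_transform(pred_weights)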

    Anthropometric Individualization of Head-Related Transfer Functions Analysis and Modeling

    Human sound localization helps to pay attention to spatially separated speakers, using interaural level and time differences as well as angle-dependent monaural spectral cues. In a monophonic teleconference, for instance, it is much more difficult to distinguish between different speakers because these binaural cues are missing. Spatial positioning of the speakers by means of binaural reproduction methods using head-related transfer functions (HRTFs) enhances speech comprehension. These HRTFs are influenced by the torso, head and ear geometry, as they describe the propagation path of the sound from a source to the ear canal entrance. Through this geometry dependency, the HRTF is direction- and subject-dependent. To enable sufficient reproduction, individual HRTFs should be used. However, measuring these HRTFs is tremendously difficult. This thesis therefore proposes approaches that adapt the HRTFs using the individual anthropometric dimensions of a user. Since localization at low frequencies is mainly influenced by the interaural time difference, two models to adapt this difference are developed and compared with existing models. Furthermore, two approaches to adapt the spectral cues at higher frequencies are studied, improved and compared. Although the localization performance with individualized HRTFs is slightly worse than with individual HRTFs, it is still better than with non-individual HRTFs, taking the measurement effort into account.
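
    The thesis's own ITD models are not given here; as a minimal sketch of the general idea of adapting the interaural time difference from an anthropometric dimension, the Python example below uses a standard Woodworth-style spherical-head approximation driven by an assumed head radius.

    # Woodworth-type spherical-head ITD: a common textbook approximation,
    # not the models developed in the thesis.
    import numpy as np

    def itd_spherical(azimuth_rad, head_radius_m, c=343.0):
        """ITD for a rigid sphere: r/c * (azimuth + sin(azimuth))."""
        return head_radius_m / c * (azimuth_rad + np.sin(azimuth_rad))

    # Example: source at 45 degrees azimuth, assumed 8.75 cm head radius
    print(itd_spherical(np.deg2rad(45.0), 0.0875))   # roughly 0.38 ms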

    Modeling of the Spatial-Domain Characteristics of Head-Related Transfer Functions

    Tohoku University, Yoichi Suzuki