5 research outputs found

    A New Selection Method of Anthropometric Parameters in Individualizing HRIR

    A trending issue in modeling head-related impulse responses (HRIRs) is how to individualize HRIR models so that they suit a particular listener. The objective of this research is to present a robust method for selecting eight anthropometric parameters out of the 27 parameters defined in the CIPIC HRTF Database. The proposed selection method is systematic and scientifically defensible, in contrast to 'trial and error' parameter selection. The selected anthropometric parameters of a given listener were used to build multiple linear regression models that individualize his or her HRIRs. We modeled the entire set of minimum-phase HRIRs in the horizontal plane for 35 subjects using principal components analysis (PCA). The individual minimum-phase HRIRs can be estimated adequately by a linear combination of ten orthonormal basis functions.
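The PCA decomposition described above can be sketched as follows. This is a minimal illustration with synthetic data standing in for the CIPIC measurements (the array sizes, random data, and variable names are assumptions, not the paper's actual pipeline): each HRIR is expressed as the ensemble mean plus a weighted sum of ten orthonormal basis functions obtained from an SVD of the mean-removed data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a bank of minimum-phase HRIRs:
# rows = measurements (subjects x directions), columns = time samples.
# The paper uses 35 subjects' horizontal-plane HRIRs from CIPIC.
n_hrirs, n_samples = 200, 128
hrirs = rng.standard_normal((n_hrirs, n_samples))

# PCA via SVD on the mean-removed ensemble.
mean_hrir = hrirs.mean(axis=0)
centered = hrirs - mean_hrir
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Keep the first ten orthonormal basis functions (principal components).
k = 10
basis = Vt[:k]                 # (10, n_samples); rows are orthonormal
weights = centered @ basis.T   # per-HRIR weight of each basis function

# Estimate each HRIR as mean + linear combination of the ten bases.
reconstructed = mean_hrir + weights @ basis

# Fraction of ensemble variance captured by the ten components.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
```

With real HRIR data the first ten components capture most of the variance, which is what makes the ten-term reconstruction adequate; with the random data used here `explained` is of course much lower.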

    The Effectiveness of Chosen Partial Anthropometric Measurements in Individualizing Head-Related Transfer Functions on Median Plane

    Individualized head-related impulse responses (HRIRs) that perfectly suit a particular listener remain an open problem in the area of HRIR modeling. We modeled the whole range of magnitudes of head-related transfer functions (HRTFs) in the frequency domain via principal components analysis (PCA), where 37 persons were subjected to sound sources on the median plane. We found that a linear combination of only 10 orthonormal basis functions was sufficient to satisfactorily model individual magnitude HRTFs. Our goal was to form multiple linear regressions (MLR) between the weights of the basis functions obtained from PCA and chosen partial anthropometric measurements, in order to individualize a particular listener's HRTFs with his or her own anthropometry. We proposed a novel individualization method based on MLR of the basis-function weights that employs only 8 out of 27 anthropometric measurements. The experimental results showed that the proposed method, with a mean error of 11.21%, outperformed our previous work on individualizing minimum-phase HRIRs (mean error 22.50%) and magnitude HRTFs on the horizontal plane (mean error 12.17%), as well as similar research. The proposed individualization method showed that the individualized magnitude HRTFs closely match the original ones, with only a slight error. Thus the eight chosen anthropometric measurements proved effective in individualizing magnitude HRTFs, particularly on the median plane.
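The MLR individualization step can be sketched as below. This is a hedged illustration with random stand-in data (the subject count matches the abstract, but the measurements, weights, and variable names are hypothetical): the eight anthropometric measurements of each training subject are regressed against that subject's PCA basis-function weights, and a new listener's weights are then predicted from his or her own measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: 37 training subjects, 8 selected anthropometric
# measurements each (the paper selects 8 of CIPIC's 27), and the 10 PCA
# basis-function weights per subject for one source direction.
n_subjects, n_anthro, n_weights = 37, 8, 10
anthro = rng.standard_normal((n_subjects, n_anthro))
pca_weights = rng.standard_normal((n_subjects, n_weights))

# Multiple linear regression: predict each PCA weight from the
# anthropometry, with an intercept column prepended.
X = np.column_stack([np.ones(n_subjects), anthro])      # (37, 9)
coef, *_ = np.linalg.lstsq(X, pca_weights, rcond=None)  # (9, 10)

# Individualize: estimate a new listener's basis-function weights from
# his or her own 8 measurements; the magnitude HRTF would then be
# rebuilt as a linear combination of the PCA basis functions.
new_listener = rng.standard_normal(n_anthro)
estimated_weights = np.concatenate([[1.0], new_listener]) @ coef
```

One regression of this form is fitted per basis-function weight (here all ten are solved jointly by `lstsq`), which is what lets a listener's anthropometry alone drive the HRTF reconstruction.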

    The Impact of an Accurate Vertical Localization with HRTFs on Short Explorations of Immersive Virtual Reality Scenarios

    Achieving a full 3D auditory experience with head-related transfer functions (HRTFs) is still one of the main challenges of spatial audio rendering. HRTFs capture the listener's acoustic effects and personal perception, allowing immersion in virtual reality (VR) applications. This paper investigates the connection between listener sensitivity to vertical localization cues and experienced presence, spatial audio quality, and attention. Two VR experiments with a head-mounted display (HMD) and an animated visual avatar are proposed: (i) a screening test that evaluates the participants' localization performance with HRTFs for a non-visible spatialized audio source, and (ii) a 2-minute free exploration of a VR scene with five audiovisual sources in both a non-spatialized (2D stereo panning) and a spatialized (free-field HRTF rendering) listening condition. The screening test allows a distinction between good and bad localizers. The second experiment shows that the different audio rendering methods introduce no bias in the quality of the experience (QoE); more interestingly, good localizers perceive a lower audio latency and are less involved in the visual aspects.

    Current Use and Future Perspectives of Spatial Audio Technologies in Electronic Travel Aids

    Electronic travel aids (ETAs) have been in focus since technology allowed the design of relatively small, light, and mobile devices for assisting the visually impaired. Since visually impaired persons rely on spatial audio cues as their primary sense of orientation, providing an accurate virtual auditory representation of the environment is essential. This paper gives an overview of the current state of spatial audio technologies that can be incorporated in ETAs, with a focus on user requirements. Most currently available ETAs either fail to address user requirements or underestimate the potential of spatial sound itself, which may explain, among other reasons, why no single ETA has gained widespread acceptance in the blind community. We believe there is ample room for applying the technologies presented in this paper, with the aim of progressively bridging the gap between accessibility and accuracy of spatial audio in ETAs. This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement no. 643636. Peer Reviewed