9 research outputs found

    HRTF Magnitude Synthesis via Sparse Representation of Anthropometric Features

    Get PDF
    We propose a method for the synthesis of the magnitudes of Head-Related Transfer Functions (HRTFs) using a sparse representation of anthropometric features. Our approach treats the HRTF synthesis problem as finding a sparse representation of the subject's anthropometric features with respect to the anthropometric features in the training set. The fundamental assumption is that the magnitudes of a given HRTF set can be described by the same sparse combination as the anthropometric data. Thus, we learn a sparse vector that represents the subject's anthropometric features as a linear superposition of the anthropometric features of a small subset of subjects from the training data. Then, we apply the same sparse vector directly to the HRTF tensor data. For evaluation purposes we use a new dataset containing both anthropometric features and HRTFs. We compare the proposed sparse-representation-based approach with ridge regression and with the data of a manikin (designed from average anthropometric data), and we simulate the best and the worst possible classifiers for selecting one of the HRTFs from the dataset. For instrumental evaluation we use log-spectral distortion. Experiments show that our sparse representation outperforms all other evaluated techniques, and that the synthesized HRTFs are almost as good as those of the best possible HRTF classifier.
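The pipeline described in this abstract (sparse weights learned from anthropometry alone, then reused on the HRTF data) can be sketched as follows. The ISTA lasso solver, the array shapes, and the toy data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def synthesize_hrtf_magnitude(a_new, A_train, H_train, lam=0.1, n_iter=500):
    """Learn sparse weights over training subjects from anthropometry,
    then apply the same weights to the training HRTF magnitudes.

    A_train: (subjects, features) anthropometric matrix.
    H_train: (subjects, bins) flattened HRTF log-magnitudes per subject.
    """
    n_subjects = A_train.shape[0]
    x = np.zeros(n_subjects)
    # Plain ISTA (iterative soft-thresholding) for the lasso problem
    #   min_x 0.5 * ||A_train.T @ x - a_new||^2 + lam * ||x||_1
    step = 1.0 / np.linalg.norm(A_train @ A_train.T, 2)
    for _ in range(n_iter):
        grad = A_train @ (A_train.T @ x - a_new)
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    # Key assumption of the method: the sparse combination transfers to HRTFs.
    return x @ H_train

# Toy demonstration with random stand-in data.
rng = np.random.default_rng(0)
A_train = rng.normal(size=(10, 5))   # 10 subjects, 5 anthropometric features
H_train = rng.normal(size=(10, 8))   # stand-in for flattened HRTF magnitudes
a_new = 0.6 * A_train[2] + 0.4 * A_train[7]  # new subject: mix of two subjects
mags = synthesize_hrtf_magnitude(a_new, A_train, H_train)
```

With a small `lam`, the recovered weight vector concentrates on the few training subjects whose anthropometry best explains the new subject, which is exactly the sparsity the abstract relies on.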

    HRTF selection by anthropometric regression for improving horizontal localization accuracy

    Get PDF
    This work focuses on objective Head-Related Transfer Function (HRTF) selection from anthropometric measurements for minimizing localization error in the frontal half of the horizontal plane. Localization predictions for every pair of 90 subjects in the HUTUBS database are first computed through an interaural time difference-based auditory model, and an error metric based on the predicted lateral error is derived. A multiple stepwise linear regression model for predicting error from inter-subject anthropometric differences is then built on a subset of subjects and evaluated on a complementary test set. Results show that, by using just three anthropometric parameters of the head and torso (head width, head depth, and shoulder circumference), the model is able to identify non-individual HRTFs whose predicted horizontal localization error generally lies below the localization blur. With fewer anthropometric parameters, this result is not guaranteed.
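As a toy illustration of this selection scheme, the sketch below fits a linear model from anthropometric differences to an error metric and picks the candidate with the lowest predicted error. The synthetic data, the `true_error` stand-in for the auditory-model metric, and the plain least-squares fit (instead of stepwise regression) are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_params = 30, 3   # e.g. head width, head depth, shoulder circumference
anthro = rng.normal(size=(n_subjects, n_params))

# Stand-in for the auditory-model metric: error grows with anthropometric mismatch.
def true_error(delta):
    return 2.0 * abs(delta[0]) + 1.5 * abs(delta[1]) + 0.5 * abs(delta[2])

# Build training pairs (anthropometric difference -> error), fit by least squares.
X, y = [], []
for i in range(n_subjects):
    for j in range(n_subjects):
        if i != j:
            X.append(np.abs(anthro[i] - anthro[j]))
            y.append(true_error(anthro[i] - anthro[j]))
X, y = np.asarray(X), np.asarray(y)
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# Select the non-individual HRTF with the lowest predicted error for a new listener.
target = rng.normal(size=n_params)
pred = np.c_[np.abs(anthro - target), np.ones(n_subjects)] @ coef
best = int(np.argmin(pred))
```

Since the stand-in error is itself linear in the absolute differences, the fitted model recovers it exactly here; with real auditory-model predictions the fit is only approximate, which is why the paper evaluates on a held-out test set.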

    HRTF Phase Synthesis via Sparse Representation of Anthropometric Features

    Get PDF
    We propose a method for the synthesis of the phases of Head-Related Transfer Functions (HRTFs) using a sparse representation of anthropometric features. Our approach treats the HRTF synthesis problem as finding a sparse representation of the subject's anthropometric features with respect to the anthropometric features in the training set. The fundamental assumption is that the group delay of a given HRTF set can be described by the same sparse combination as the anthropometric data. Thus, we learn a sparse vector that represents the subject's anthropometric features as a linear superposition of the anthropometric features of a small subset of subjects from the training data. Then, we apply the same sparse vector directly to the HRTF group delay data. For evaluation purposes we use a new dataset containing both anthropometric features and HRTFs. We compare the proposed sparse-representation-based approach with ridge regression and with the data of a manikin (designed from average anthropometric data), and we simulate the best and the worst possible classifiers for selecting one of the HRTFs from the dataset. For objective evaluation we use the mean square error of the group delay scaling factor. Experiments show that our sparse representation outperforms all other evaluated techniques, and that the synthesized HRTFs are almost as good as those of the best possible HRTF classifier.

    Interaural Time Delay Personalisation Using Incomplete Head Scans

    Get PDF
    When using a set of generic head-related transfer functions (HRTFs) for spatial sound rendering, personalisation can be considered to minimise localisation errors. This typically involves tuning the characteristics of the HRTFs or a parametric model according to the listener's anthropometry. However, measuring anthropometric features directly remains a challenge in practical applications, and the mapping between anthropometric and acoustic features is an open research problem. Here we propose matching a face template to a listener's head scan or depth image to extract anthropometric information. The deformation of the template is used to personalise the interaural time differences (ITDs) of a generic HRTF set. The proposed method is shown to outperform reference methods when used with high-resolution 3-D scans. Experiments with single-frame depth images indicate that the method is applicable to lower-resolution or partial scans, which are quicker and easier to obtain than full 3-D scans. These results suggest that the proposed method may be a viable option for ITD personalisation in practical applications.

    On the preprocessing and postprocessing of HRTF individualization based on sparse representation of anthropometric features

    Get PDF
    Individualization of head-related transfer functions (HRTFs) can be realized using the person's anthropometry with a pretrained model. This model usually establishes a direct linear or non-linear mapping from anthropometry to HRTFs in the training database. Due to the complex relation between anthropometry and HRTFs, the accuracy of this model depends heavily on the correct selection of the anthropometric features. To alleviate this problem and improve the accuracy of HRTF individualization, an indirect HRTF individualization framework was proposed recently, in which HRTFs are synthesized using a sparse representation trained on the anthropometric features. In this paper, we extend that study by investigating the effects of different preprocessing and postprocessing methods on HRTF individualization. Our experimental results show that preprocessing and postprocessing methods are crucial for achieving accurate HRTF individualization.

    Surround by Sound: A Review of Spatial Audio Recording and Reproduction

    Get PDF
    In this article, a systematic overview of various recording and reproduction techniques for spatial audio is presented. While binaural recording and rendering are designed to resemble the human two-ear auditory system and reproduce sounds specifically for a listener's two ears, soundfield recording and reproduction using a large number of microphones and loudspeakers replicate an acoustic scene within a region. These two fundamentally different types of techniques are discussed in the paper. A recently popular area, multi-zone reproduction, is also briefly reviewed. The paper is concluded with a discussion of the current state of the field and open problems. The authors acknowledge National Natural Science Foundation of China (NSFC) grant No. 61671380 and Australian Research Council Discovery Scheme DE 150100363.

    Current Use and Future Perspectives of Spatial Audio Technologies in Electronic Travel Aids

    Get PDF
    Electronic travel aids (ETAs) have been in focus since technology allowed designing relatively small, light, and mobile devices for assisting the visually impaired. Since visually impaired persons rely on spatial audio cues as their primary sense of orientation, providing an accurate virtual auditory representation of the environment is essential. This paper gives an overview of the current state of spatial audio technologies that can be incorporated into ETAs, with a focus on user requirements. Most currently available ETAs either fail to address user requirements or underestimate the potential of spatial sound itself, which may explain, among other reasons, why no single ETA has gained widespread acceptance in the blind community. We believe there is ample space for applying the technologies presented in this paper, with the aim of progressively bridging the gap between accessibility and accuracy of spatial audio in ETAs. This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement no. 643636.

    Manifold Learning for Spatial Audio Synthesis (Aprendizado de variedades para a síntese de áudio espacial)

    Get PDF
    Advisors: Luiz César Martini, Bruno Sanches Masiero. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. The objective of binaurally rendered spatial audio is to simulate a sound source in arbitrary spatial locations through Head-Related Transfer Functions (HRTFs). HRTFs model the direction-dependent influence of the ears, head, and torso on the incident sound field. When an audio source is filtered through a pair of HRTFs (one for each ear), a listener perceives the sound as though it were reproduced at a specific location in space. Inspired by our successful results building a practical face-recognition application for visually impaired people that uses a spatial audio user interface, in this work we deepened our research to address several scientific aspects of spatial audio. In this context, this thesis explores the incorporation of spatial audio prior knowledge using a novel nonlinear HRTF representation based on manifold learning, which tackles three major challenges of broad interest in the spatial audio community: HRTF personalization, HRTF interpolation, and human sound localization improvement. Exploring manifold learning for spatial audio is based on the assumption that the data (i.e., the HRTFs) lie on a low-dimensional manifold. This assumption has also been of interest among researchers in computational neuroscience, who argue that manifolds are crucial for understanding the underlying nonlinear relationships of perception in the brain. For all of our contributions using manifold learning, the construction of a single manifold across subjects through an Inter-subject Graph (ISG) has proven to yield a powerful HRTF representation capable of incorporating prior knowledge of HRTFs and capturing the underlying factors of spatial hearing. Moreover, the use of our ISG to construct a single manifold offers the advantage of employing information from other individuals to improve the overall performance of the techniques proposed herein. The results show that our ISG-based techniques outperform other linear and nonlinear methods in tackling the spatial audio challenges addressed by this thesis. Doctorate in Electrical Engineering (Computer Engineering concentration). Funding: FAPESP grant 2014/14630-9; CAPES.
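The abstract does not detail how the inter-subject graph is built. As a rough illustration of the idea of a single manifold spanning subjects, the sketch below connects each HRTF only to nearest neighbours from other subjects and embeds the graph with Laplacian eigenmaps; the distance, neighbourhood rule, and embedding are placeholder choices, not the thesis's exact algorithm:

```python
import numpy as np

def inter_subject_embedding(hrtfs, subject_ids, k=3, dim=2):
    """Embed HRTFs from several subjects into one low-dimensional manifold.

    hrtfs:       (n, bins) array, one row per measured HRTF.
    subject_ids: (n,) array naming the subject each HRTF belongs to.
    """
    n = hrtfs.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        # Candidate neighbours: HRTFs of *other* subjects only (the
        # inter-subject aspect of the graph).
        others = [j for j in range(n) if subject_ids[j] != subject_ids[i]]
        d = np.linalg.norm(hrtfs[others] - hrtfs[i], axis=1)
        for j in np.asarray(others)[np.argsort(d)[:k]]:
            W[i, j] = W[j, i] = 1.0
    # Laplacian eigenmaps: small eigenvectors of L = D - W give coordinates.
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)
    # Skip the trivial constant eigenvector; keep the next `dim` ones.
    return vecs[:, 1:dim + 1]

# Toy demonstration: 4 subjects with 3 HRTFs each.
rng = np.random.default_rng(0)
hrtfs = rng.normal(size=(12, 6))
subject_ids = np.repeat(np.arange(4), 3)
coords = inter_subject_embedding(hrtfs, subject_ids, k=3, dim=2)
```

Because every edge crosses subject boundaries, points from different subjects share one coordinate system, which is what lets information from other individuals support personalization or interpolation for a new listener.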

    Interaural time difference individualization in HRTF by scaling through anthropometric parameters

    Full text link
    Head-related transfer function (HRTF) individualization can improve the perception of binaural sound. The interaural time difference (ITD) of the HRTF is a relevant cue for sound localization, especially in azimuth. Therefore, individualization of the ITD is likely to result in better spatial sound localization. A study of the ITD has been conducted from a perceptual point of view using data from individual HRTF measurements and subjective perceptual tests. Two anthropometric dimensions were shown to relate to the ITD, predicting the subjective behavior of various subjects in a perceptual test. With this information, a method is proposed to individualize the ITD of a generic HRTF set by adapting it with a scale factor, obtained from a linear regression formula dependent on the two anthropometric dimensions. The method has been validated with both objective measures and another perceptual test. In addition, practical regression coefficients are provided for fitting the ITD of the generic HRTFs of the widely used Brüel & Kjær 4100 and Neumann KU100 binaural dummy heads. This work has received funding from the Spanish Ministry of Science and Innovation through project RTI2018-097045-B-C22, and from the Spanish Ministry of Universities under the "Margarita Salas" program supported by the NextGenerationEU funds of the European Union. Gutiérrez-Parera, P.; López Monfort, J. J.; Mora-Merchan, J. M.; Larios, D. F. (2022). Interaural time difference individualization in HRTF by scaling through anthropometric parameters. EURASIP Journal on Audio, Speech and Music Processing, 2022(1), 1-19. https://doi.org/10.1186/s13636-022-00241-y
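The scaling approach described in this abstract can be sketched as follows: a generic ITD curve is multiplied by a per-listener scale factor predicted linearly from two anthropometric dimensions. The coefficients, dimension names, and generic curve below are invented for illustration; the paper provides the actual fitted values for the Brüel & Kjær 4100 and Neumann KU100 heads:

```python
import numpy as np

def itd_scale_factor(dim1_cm, dim2_cm, b0=0.2, b1=0.03, b2=0.01):
    # Hypothetical linear regression: scale = b0 + b1*dim1 + b2*dim2.
    # b0, b1, b2 are placeholder coefficients, not the published ones.
    return b0 + b1 * dim1_cm + b2 * dim2_cm

def personalize_itd(generic_itd_us, dim1_cm, dim2_cm):
    """Scale a generic ITD curve (microseconds) for one listener."""
    return itd_scale_factor(dim1_cm, dim2_cm) * np.asarray(generic_itd_us)

# Woodworth-style generic ITD curve over frontal azimuths (illustrative only).
azimuths = np.linspace(0, 90, 7)
generic = 255.0 * (np.sin(np.radians(azimuths)) + np.radians(azimuths)) / 2
personal = personalize_itd(generic, 15.5, 20.0)  # e.g. two head dimensions in cm
```

The appeal of this design is that only a single scalar per listener has to be estimated, so two easily measured dimensions and a linear formula suffice to adapt an entire generic HRTF set.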