
    HRTF individualization using deep learning

    The research presented in this paper focuses on Head-Related Transfer Function (HRTF) individualization using deep learning techniques. HRTF individualization is paramount for accurate binaural rendering, which is used in XR technologies, tools for the visually impaired, and many other applications. The rising availability of public HRTF data currently allows experimentation with different input data formats and various computational models. Accordingly, three research directions are investigated here: (1) extraction of predictors from user data; (2) unsupervised learning of HRTFs based on autoencoder networks; and (3) synthesis of HRTFs from anthropometric data using deep multilayer perceptrons and principal component analysis. While none of the aforementioned investigations has shown outstanding results to date, the knowledge acquired throughout the development and troubleshooting phases highlights areas for improvement that are expected to pave the way toward more accurate models for HRTF individualization.
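Direction (3) above, synthesis of HRTFs from anthropometric data via principal component analysis, can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the paper's method: the dataset sizes are invented, and a least-squares regression stands in for the deep multilayer perceptron that maps anthropometry to principal-component weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (assumptions): 50 subjects, 10 anthropometric
# measurements each, HRTF magnitude responses at 64 frequency bins.
n_subjects, n_anthro, n_bins = 50, 10, 64
anthro = rng.normal(size=(n_subjects, n_anthro))
hrtf_mag = rng.normal(size=(n_subjects, n_bins))

# --- PCA on the HRTF magnitudes (via SVD of the centered data) ---
mean = hrtf_mag.mean(axis=0)
centered = hrtf_mag - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
n_pc = 8                            # keep the leading components
components = Vt[:n_pc]              # (n_pc, n_bins) spectral basis
weights = centered @ components.T   # per-subject PC weights

# --- Map anthropometry -> PC weights (least squares as a linear
# stand-in for the paper's deep multilayer perceptron) ---
W, *_ = np.linalg.lstsq(anthro, weights, rcond=None)

# Synthesize an HRTF magnitude response for an unseen subject:
# predict PC weights, then reconstruct in the PCA basis.
new_anthro = rng.normal(size=(1, n_anthro))
synth = new_anthro @ W @ components + mean   # shape (1, 64)
```

In a real pipeline the regression would be replaced by the trained MLP, and the PCA basis would be fitted on measured HRTF magnitudes from a public dataset.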

    The Viking HRTF dataset v2

    The Viking HRTF dataset v2 is a collection of head-related transfer functions (HRTFs) measured at the University of Iceland. It includes full-sphere HRTFs measured on a dense spatial grid (1513 positions) with a KEMAR mannequin fitted with different pairs of artificial pinnae. The artificial pinnae were previously obtained through a custom molding procedure from different lifelike human heads (courtesy of Ernst Backman, Saga Museum Reykjavík).

    Spatial Audio and Individualized HRTFs using a Convolutional Neural Network (CNN)

    Spatial audio and 3-dimensional sound rendering techniques play a pivotal role in immersive audio experiences. Head-Related Transfer Functions (HRTFs) are acoustic filters that represent how sound interacts with an individual's unique head and ear anatomy. Using HRTFs that match the subject's anatomical traits is crucial to ensure a personalized spatial experience. This work proposes an HRTF individualization method based on anthropometric features automatically extracted from ear images using a Convolutional Neural Network (CNN). First, a CNN is implemented and tested to assess the performance of machine learning in positioning landmarks on ear images. The I-BUG dataset, containing ear images with 55 corresponding landmarks, was used to train and test the neural network. Subsequently, 12 relevant landmarks were selected to correspond to 7 specific anthropometric measurements established by the HUTUBS database. These landmarks serve as references for computing distances in pixels in order to retrieve the anthropometric measurements from the ear images. Once the 7 pixel distances are extracted from an ear image, they are converted into centimetres using conversion factors, and a best-match method is implemented that computes the Euclidean distance to each set in a database of 116 ears whose 7 anthropometric measurements are provided by the HUTUBS database. The closest anthropometric match is then identified and the corresponding set of HRTFs obtained for personalized use. The method is evaluated for its validity rather than the accuracy of its results: the conceptual scope of each stage has been verified and shown to function correctly, and the various steps and available elements of the process are reviewed and challenged to define a larger algorithmic pipeline designed for the desired task.
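The final matching stage described above, pixel distances converted to centimetres and then matched against a database by Euclidean distance, can be sketched as follows. The database values here are random placeholders standing in for the 116 HUTUBS ears, and the single per-image scale factor is an assumption; only the shapes (116 ears, 7 measurements) follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical database: 116 ears x 7 anthropometric measurements (cm),
# standing in for the values provided by the HUTUBS database.
db = rng.uniform(1.0, 7.0, size=(116, 7))

def pixels_to_cm(pixel_dists, px_per_cm):
    """Convert landmark distances measured in pixels to centimetres
    using a conversion factor (assumed here: one scale per image)."""
    return np.asarray(pixel_dists, dtype=float) / px_per_cm

def best_match(query_cm, database):
    """Index of the database ear whose 7 measurements are closest
    to the query in Euclidean distance."""
    dists = np.linalg.norm(database - query_cm, axis=1)
    return int(np.argmin(dists))

# Example: 7 landmark distances extracted from an ear image, in pixels
# (illustrative values, not taken from the paper).
pixel_dists = [120, 85, 40, 60, 33, 25, 90]
query = pixels_to_cm(pixel_dists, px_per_cm=30.0)
idx = best_match(query, db)
# idx identifies the subject whose HRTF set would then be retrieved
# for personalized rendering.
```

The nearest-neighbour step is deliberately simple; weighting the 7 measurements or normalizing them per dimension would be natural refinements.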