
    The Impact of an Accurate Vertical Localization with HRTFs on Short Explorations of Immersive Virtual Reality Scenarios

    Achieving a full 3D auditory experience with head-related transfer functions (HRTFs) is still one of the main challenges of spatial audio rendering. HRTFs capture the listener's acoustic effects and personal perception, allowing immersion in virtual reality (VR) applications. This paper investigates the connection between listener sensitivity to vertical localization cues and the experienced presence, spatial audio quality, and attention. Two VR experiments with a head-mounted display (HMD) and an animated visual avatar are proposed: (i) a screening test evaluating the participants' localization performance with HRTFs for a non-visible spatialized audio source, and (ii) a two-minute free exploration of a VR scene with five audiovisual sources, in both non-spatialized (2D stereo panning) and spatialized (free-field HRTF rendering) listening conditions. The screening test allows a distinction between good and bad localizers. The exploration shows that no biases are introduced in the quality of the experience (QoE) due to the different audio rendering methods; more interestingly, good localizers perceive a lower audio latency and are less involved in the visual aspects.
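    The paper does not give its classification procedure here; as a minimal sketch (Python, with a hypothetical error threshold and function names), a screening test of this kind could separate good from bad localizers by their mean unsigned elevation error:

```python
import numpy as np

def mean_elevation_error(true_elev_deg, judged_elev_deg):
    """Mean unsigned vertical localization error in degrees."""
    true_elev = np.asarray(true_elev_deg, dtype=float)
    judged = np.asarray(judged_elev_deg, dtype=float)
    return np.mean(np.abs(judged - true_elev))

def classify_localizer(true_elev_deg, judged_elev_deg, threshold_deg=15.0):
    """Label a participant 'good' or 'bad' by a (hypothetical) error threshold."""
    err = mean_elevation_error(true_elev_deg, judged_elev_deg)
    return "good" if err < threshold_deg else "bad"

# Example: one participant's trials (target vs. reported elevation, degrees).
targets = [-30, 0, 30, 60, 0, 30]
responses = [-20, 5, 40, 45, -10, 25]
print(classify_localizer(targets, responses))  # mean error ~9.2 deg -> "good"
```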

    Virtual Reality Exploration with Different Head-Related Transfer Functions

    One of the main challenges of spatial audio rendering in headphones is the crucial work behind the personalization of the so-called head-related transfer functions (HRTFs). HRTFs capture the listener's acoustic effects, allowing a personal perception of immersion in a virtual reality context. This paper investigates the possible benefits of personalized HRTFs that were individually selected based on anthropometric data (pinna shapes). Personalized audio rendering was compared to a generic HRTF and a stereo sound condition. Two studies were performed: the first was a screening test evaluating the participants' localization performance with HRTFs for a non-visible spatialized audio source; the second allowed the participants to freely explore a VR scene with five audiovisual sources for two minutes each, under both HRTF and stereo conditions. A questionnaire with items for spatial audio quality, presence, and attention was used for the evaluation. Results indicate that the audio rendering method made no difference to the questionnaire responses within the two minutes of free exploration.
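    As an illustration of anthropometry-based HRTF selection (a minimal sketch, not the authors' procedure; the feature set, normalization, and names are assumptions), one could pick the database subject whose pinna measurements lie nearest to the listener's:

```python
import numpy as np

def select_hrtf(listener_pinna, database_pinnae):
    """Pick the HRTF set whose subject's pinna measurements are nearest
    (Euclidean distance after z-score normalization) to the listener's.

    listener_pinna : 1-D array of pinna measurements (e.g., cavum concha
                     height/width, pinna height/width -- hypothetical here).
    database_pinnae: 2-D array, one row of the same measurements per
                     HRTF subject in the database.
    Returns the row index of the best-matching subject.
    """
    db = np.asarray(database_pinnae, dtype=float)
    mu, sigma = db.mean(axis=0), db.std(axis=0)
    z_db = (db - mu) / sigma
    z_listener = (np.asarray(listener_pinna, dtype=float) - mu) / sigma
    distances = np.linalg.norm(z_db - z_listener, axis=1)
    return int(np.argmin(distances))

# Example with made-up measurements (mm): 3 database subjects, 4 features.
database = [[18.0, 16.5, 64.0, 29.0],
            [20.5, 17.0, 60.5, 31.5],
            [17.5, 15.0, 66.0, 28.0]]
listener = [19.8, 17.2, 61.0, 31.0]
print(select_hrtf(listener, database))  # -> 1 (second subject matches best)
```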

    Feasibility studies for the measurement of time-like proton electromagnetic form factors from p̄p → μ⁺μ⁻ at PANDA at FAIR

    This paper reports on Monte Carlo simulation results for future measurements of the moduli of the time-like proton electromagnetic form factors, |G_E| and |G_M|, using the p̄p → μ⁺μ⁻ reaction at PANDA (FAIR). The electromagnetic form factors are fundamental quantities parameterizing the electric and magnetic structure of hadrons. This work estimates the statistical and total accuracy with which the form factors can be measured at PANDA, using an analysis of simulated data within the PandaRoot software framework. The most crucial background channel is p̄p → π⁺π⁻, due to the very similar behavior of muons and pions in the detector. The suppression factors are evaluated for this and all other relevant background channels at different values of the antiproton beam momentum. The signal/background separation is based on a multivariate analysis using the Boosted Decision Trees method. An expected background subtraction is included in this study, based on realistic angular distributions of the background contribution. Systematic uncertainties are considered, and the relative total uncertainties of the form factor measurements are presented.
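    As context for how such a measurement works (a standard one-photon-exchange result from the literature, not quoted from this paper): at Born level, and neglecting the lepton mass, the p̄p → ℓ⁺ℓ⁻ angular distribution reads

```latex
\[
\frac{d\sigma}{d\cos\theta^{*}} \;=\; \frac{\pi \alpha^{2}}{2\,\beta s}
\left[\,(1+\cos^{2}\theta^{*})\,\lvert G_{M}\rvert^{2}
      \;+\; \frac{\sin^{2}\theta^{*}}{\tau}\,\lvert G_{E}\rvert^{2}\right],
\qquad \tau = \frac{s}{4 m_{p}^{2}}, \quad \beta = \sqrt{1-\frac{1}{\tau}},
\]
```

    where θ* is the centre-of-mass scattering angle of the negative lepton, s the squared centre-of-mass energy, and m_p the proton mass. Fitting the measured angular distribution separates |G_E| (the sin²θ* term) from |G_M| (the 1+cos²θ* term), which is why the angular shape of the π⁺π⁻ background matters so much.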

    Peek-a-book: Playing with an interactive book

    Presented at the 12th International Conference on Auditory Display (ICAD), London, UK, June 20-23, 2006. This demonstration presents a prototype of a new digitally augmented book for children, which uses sensors to allow continuous user interaction and to generate (not just play back) sounds in real time. During the demonstration the user can experience the book, intuitively modifying and controlling the sound generation process.
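    A minimal sketch of the underlying idea of sensor-driven real-time sound generation (Python/NumPy; the sensor, mapping, and synthesis here are hypothetical, not the prototype's actual design):

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def sensor_to_audio(sensor_values, block_size=512):
    """Render audio one block at a time, mapping a normalized sensor
    reading (0..1, e.g., a page-bend angle -- hypothetical here) to the
    pitch of a sine oscillator. Sound is generated on the fly rather
    than played back; phase is kept continuous across blocks to avoid clicks.
    """
    phase = 0.0
    blocks = []
    for value in sensor_values:
        freq = 220.0 + 660.0 * float(value)  # map 0..1 to 220..880 Hz
        t = np.arange(block_size) / SR
        blocks.append(0.3 * np.sin(2 * np.pi * freq * t + phase))
        phase = (phase + 2 * np.pi * freq * block_size / SR) % (2 * np.pi)
    return np.concatenate(blocks)

# Example: a slow "page bend" ramp produces a rising glissando.
audio = sensor_to_audio(np.linspace(0.0, 1.0, 200))
print(audio.shape)  # (102400,) samples, about 2.3 s at 44.1 kHz
```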

    Illusions, auditory

    Definition: Auditory illusions occur when the listener hears sounds that are not present in the stimulus, the brain organizing and interpreting sensory stimulation in a way that produces a distorted perception. It is possible to distinguish between classical examples of auditory illusions and illusions that emerge from the interplay of audition with multisensory perception. Auditory illusions can then be taken into account in designing Enactive Interfaces for their possible creative uses. A good parallel can be drawn with visual illusions, often used in hyperrealistic painting (e.g., the image of a mirror that does not show the painter painting it). Auditory illusions in an immersive, enactive environment can be made much more striking than in a usual one, because an immersive environment makes it possible to enhance the cooperation between modalities. The interaction between hearing, vision, and haptics can be tightly controlled in such an environment, and the cooperation between these modalities can therefore be increased or decreased at will.

    The voice painter

    Very often, when looking at a painting or touching a sculpture, it is possible to experience what it means to be a perceiver acting as enactor of perceptual content. The piece of art that we try to explore forces us to move around the object in order to discover new meanings and sensations. We need to interact with the artistic object in order to completely understand it. Is it possible to think about sound and music in this way? Is auditory and musical experience amenable to such investigation? This paper describes an enactive system that, by merging the concepts of autographic and allographic arts, transforms the spectator of a multimodal performance into the performer-perceiver-enactor. The voice painter, an instrument for painting with voice and movement in a closed-loop interaction, offers a new artistic metaphor as well as a potentially useful tool for speech therapy programs.
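    To make the closed-loop idea concrete, here is a minimal sketch (hypothetical mappings and constants, not the actual system) of turning voice features into brush parameters:

```python
import numpy as np

def voice_to_brush(frame, sr=44100):
    """Map one short audio frame of the voice to brush parameters.
    Loudness (RMS) -> brush size; a rough pitch estimate via
    autocorrelation -> brush hue. All mappings are hypothetical.
    """
    x = np.asarray(frame, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    size = 2.0 + 60.0 * min(rms * 10.0, 1.0)  # brush size in pixels

    # Crude pitch estimate: autocorrelation peak for lags in 80..400 Hz.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = sr // 400, sr // 80
    lag = lo + int(np.argmax(ac[lo:hi]))
    pitch = sr / lag
    hue = (pitch - 80.0) / (400.0 - 80.0)     # 0..1, low voice = red end
    return size, min(max(hue, 0.0), 1.0)

# Example: a synthetic 200 Hz "voice" frame.
t = np.arange(2048) / 44100
size, hue = voice_to_brush(0.2 * np.sin(2 * np.pi * 200 * t))
print(round(size, 1), round(hue, 2))  # loud frame, hue near 0.38
```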

    Designing interactive sound for motor rehabilitation tasks

    Technology-assisted motor rehabilitation is today one of the most potentially interesting application areas for research in sonic interaction design (SID). The strong social implications, the novelty of such a rapidly advancing field, as well as its inherently interdisciplinary nature (contents combine topics in robotics, virtual reality, and haptics as well as neuroscience and rehabilitation) are some of the aspects that consolidate its challenging and captivating character. Such prospects justify the considerable amount of attention it has received in the last decade from researchers in the fields of both medicine and engineering, the purpose of their joint effort being the development of innovative methods to treat motor disabilities occurring as a consequence of several possible traumatic (physical or neurological) injuries. The final goal of the designed rehabilitation process is to facilitate the reintegration of patients into social and domestic life by helping them regain the ability to autonomously perform activities of daily living (ADLs, e.g., eating or walking). However, such activities embody complex motor tasks for which current rehabilitation systems lack the sophistication needed to assist patients during their performance. Much work is needed to address challenges related to hardware, software, and control system design, as well as effective approaches for delivering treatment [13]. In particular, although it is understood that multimodal feedback can be used to improve performance in complex motor tasks [9], a thorough analysis of the literature in this field shows that the potential of auditory feedback is largely underestimated.
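    As a concrete illustration of auditory feedback in a motor task (a minimal sketch; the mapping constants and geometry are hypothetical, not from the chapter), movement error can be sonified as a tone whose pitch and level grow with the deviation from the target:

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def sonify_error(target_pos, actual_pos, duration=0.1):
    """Turn the instantaneous deviation of the hand from a target
    trajectory into a feedback tone: larger error -> higher pitch and
    louder tone; silence when on target. Mapping constants hypothetical.
    Positions are 3-D coordinates in metres.
    """
    error = float(np.linalg.norm(np.asarray(actual_pos, dtype=float)
                                 - np.asarray(target_pos, dtype=float)))
    if error < 0.01:                           # within 1 cm: no feedback
        return np.zeros(int(SR * duration))
    freq = 300.0 + 2000.0 * min(error, 0.5)    # pitch rises with error
    gain = 0.5 * min(error / 0.5, 1.0)         # level rises with error
    t = np.arange(int(SR * duration)) / SR
    return gain * np.sin(2 * np.pi * freq * t)

# Example: hand 12 cm off target -> an audible warning tone.
tone = sonify_error(target_pos=[0.3, 0.2, 1.0], actual_pos=[0.38, 0.28, 1.04])
print(tone.size, tone.max() > 0)  # 4410 samples, True
```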