    Virtual Reality Exploration with Different Head-Related Transfer Functions

    One of the main challenges of spatial audio rendering over headphones is the personalization of the so-called head-related transfer functions (HRTFs). HRTFs capture the listener-specific acoustic effects that enable a personal sense of immersion in a virtual reality context. This paper investigates the possible benefits of personalized HRTFs selected individually on the basis of anthropometric data (pinna shapes). Personalized audio rendering was compared to a generic HRTF and a stereo sound condition. Two studies were performed: the first was a screening test evaluating participants' localization performance with HRTFs for a non-visible spatialized audio source; the second allowed participants to freely explore a VR scene with five audiovisual sources for two minutes each, under both HRTF and stereo conditions. A questionnaire with items on spatial audio quality, presence, and attention was used for the evaluation. Results indicate that the audio rendering method made no difference to questionnaire responses within the two minutes of free exploration.
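
    The abstract itself includes no code; as a rough, hypothetical sketch of what HRTF-based spatialization involves, the Python fragment below convolves a dry mono signal with a left/right head-related impulse response (HRIR) pair, the time-domain counterpart of an HRTF. All names are illustrative; personalized rendering would simply swap in the HRIR pair selected from the listener's pinna shape.

        import numpy as np
        from scipy.signal import fftconvolve

        def render_binaural(mono, hrir_left, hrir_right):
            # Convolve the dry source with the HRIR measured (or selected)
            # for the desired direction; one impulse response per ear.
            left = fftconvolve(mono, hrir_left)
            right = fftconvolve(mono, hrir_right)
            return np.stack([left, right])  # shape (2, n): left/right channels

    The stereo baseline in the study would correspond to conventional amplitude panning rather than this per-ear filtering.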

    The Impact of an Accurate Vertical Localization with HRTFs on Short Explorations of Immersive Virtual Reality Scenarios

    Achieving a full 3D auditory experience with head-related transfer functions (HRTFs) is still one of the main challenges of spatial audio rendering. HRTFs capture the listener-specific acoustic effects and personal perception that allow immersion in virtual reality (VR) applications. This paper investigates the connection between listener sensitivity to vertical localization cues and experienced presence, spatial audio quality, and attention. Two VR experiments with a head-mounted display (HMD) and an animated visual avatar are proposed: (i) a screening test evaluating participants' localization performance with HRTFs for a non-visible spatialized audio source, and (ii) a 2-minute free exploration of a VR scene with five audiovisual sources, in both non-spatialized (2D stereo panning) and spatialized (free-field HRTF rendering) listening conditions. The screening test allows a distinction between good and bad localizers. The second experiment shows that the different audio rendering methods introduce no biases in the quality of the experience (QoE); more interestingly, good localizers perceived lower audio latency and were less involved in the visual aspects.
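
    The abstract does not say how localization performance was scored in the screening test; one plausible metric, sketched below with hypothetical names and a placeholder threshold (the paper's actual criterion is not given here), is the great-circle angular error between the target direction and the participant's response.

        import numpy as np

        def angular_error_deg(az_t, el_t, az_r, el_r):
            # Great-circle distance in degrees between the target (az_t, el_t)
            # and the response (az_r, el_r); all angles given in degrees.
            a1, e1, a2, e2 = map(np.radians, (az_t, el_t, az_r, el_r))
            cos_d = np.sin(e1) * np.sin(e2) + np.cos(e1) * np.cos(e2) * np.cos(a1 - a2)
            return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

        def is_good_localizer(errors, threshold_deg=30.0):
            # Placeholder split: mean error over the screening trials below an
            # assumed 30-degree cutoff; the study's real threshold may differ.
            return float(np.mean(errors)) < threshold_deg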

    The Illustrated Ivan: Ivan IV in the Illustrated Chronicle Compilation

    The surviving segments of the incomplete Illustrated Chronicle Compilation (LLS), in both text and miniatures, present a consistently positive image of Ivan IV as pious, just, and competent, although the portrayal of individual events could vary. Nevertheless, they also sometimes portray him as not in control of his elite, his subjects, or events. If Ivan had to restore order by punishing those who had acted unjustly without his permission, then he had obviously failed to prevent such misdeeds. The miniatures in the LLS present a cohesive image of the Public Ivan, despite the various stages of completion of individual segments, the efforts at revision underway when the project was abandoned, and its fragmented preservation thereafter. However, the Illustrated Chronicle Compilation never criticizes Ivan for his failings, blaming his actions on human or non-human evildoers. Although the LLS idealized Ivan, it did not idealize Ivan's reign.

    Developing more authentic E-courses by integrating working life mentoring and social media

    Studies show that the affordances of social media have not yet been fully exploited in the promotion of authentic e-learning in higher education. The e-Learning of the Future project (2009-2011) met these challenges through working life mentoring using social media. In this paper, we examine the planning and implementation of social media in nine project courses and how these changes support authentic learning. A further focus of interest is the role of working life mentors in the process. The outcomes indicate that the introduction of social media measures strongly supported the strengthening of authentic learning principles (Herrington & Oliver, 2000) on the courses. Revisions to learning tasks centred on establishing connections to expert communities, the use of blogs, and the compilation of recorded entrepreneurial narratives. Working life mentors brought an up-to-date, work-oriented perspective to the process and highlighted the skills required by workplaces of the future. Developing educational tasks that cross traditional boundaries raises issues of operational culture change, the roles of partners, and the transparency of education; these implications are discussed in the paper.

    A sensor pairing and fusion system for a multi-user environment

    This paper proposes a system for sensor pairing and fusion in an interactive multi-user environment. Using the system, we integrated mobile accelerometer and fixed-position optical tracking methods, and implemented two active music listening applications based on the movement interaction from one or more users. We found that an acceleration-domain similarity index between the two tracking methods is able to pair the raw interaction streams in near real time, and that concurrent sampling of the streams allows for easy sensor fusion. Although these algorithms still require further refinement, we believe that combining accurate position with accurate acceleration data is beneficial for novel interactive applications.
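
    The abstract leaves the similarity index unspecified; a minimal sketch, assuming both streams are resampled to a common rate and the accelerometer data is gravity-compensated, is a normalized correlation between the IMU acceleration magnitude and the second derivative of the optically tracked position (all names hypothetical).

        import numpy as np

        def acceleration_similarity(imu_accel, optical_pos, dt):
            # imu_accel: (n, 3) accelerometer samples; optical_pos: (n, 3)
            # optical-tracker positions, both sampled with time step dt.
            # Differentiating position twice moves it into the acceleration domain.
            optical_accel = np.gradient(np.gradient(optical_pos, dt, axis=0), dt, axis=0)
            a = np.linalg.norm(imu_accel, axis=1)
            b = np.linalg.norm(optical_accel, axis=1)
            a = (a - a.mean()) / (a.std() + 1e-12)  # z-score both magnitudes
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float(np.mean(a * b))  # Pearson-style score in [-1, 1]

    Using magnitudes sidesteps aligning the IMU and optical coordinate frames; pairing then amounts to assigning each mobile stream to the optical track that maximizes this score over a sliding window.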

    Worlds of information: Designing for engagement at a public multi-touch display
