60 research outputs found

    Sonic Interactions in Virtual Environments

    This open access book tackles the design of 3D spatial interactions from an audio-centered and audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences. The key elements that enhance the sensation of place in a virtual environment (VE) are: immersive audio, i.e. the computational aspects of the acoustical-space properties of Virtual Reality (VR) technologies; sonic interaction, i.e. the human-computer interplay through auditory feedback in VEs; and VR systems, which naturally support multimodal integration, impacting different application domains. Sonic Interactions in Virtual Environments features state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of experience in multimodal scenarios, and applications. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, humanities, and beyond. Their mission is to shape an emerging field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities, and to raise awareness among VR communities, researchers, and practitioners of the importance of sonic elements when designing immersive environments.
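
    A minimal illustration of the "immersive audio" element above: the sketch below renders a mono source binaurally by convolving it with a left/right pair of head-related impulse responses (HRIRs). The synthetic HRIRs here, built from a crude interaural time and level difference, are placeholder assumptions standing in for measured HRIR sets; nothing in the sketch is prescribed by the book.

        import numpy as np
        from scipy.signal import fftconvolve

        FS = 48_000  # sample rate in Hz

        def toy_hrir_pair(azimuth_deg, itd_max_s=0.0007, n_taps=256):
            """Crude left/right HRIR pair from an interaural time and level
            difference; real auralization would use measured HRIRs instead."""
            itd = itd_max_s * np.sin(np.deg2rad(azimuth_deg))          # > 0 means source on the right
            ild = 10 ** (6.0 * np.sin(np.deg2rad(azimuth_deg)) / 20)   # up to ~6 dB level difference
            left, right = np.zeros(n_taps), np.zeros(n_taps)
            centre = n_taps // 2
            left[centre + int(round(max(itd, 0) * FS))] = 1.0 / ild    # delayed/attenuated when the source is on the right
            right[centre + int(round(max(-itd, 0) * FS))] = ild        # delayed/attenuated when the source is on the left
            return left, right

        def binauralize(mono, azimuth_deg):
            """Convolve a mono signal with the HRIR pair for a given azimuth."""
            hl, hr = toy_hrir_pair(azimuth_deg)
            return np.stack([fftconvolve(mono, hl), fftconvolve(mono, hr)])

        # Example: a 0.5 s noise burst placed 45 degrees to the listener's right.
        noise = np.random.default_rng(0).standard_normal(FS // 2)
        stereo = binauralize(noise, azimuth_deg=45)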

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) further weight is given to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
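
    The radial-shift control condition described above can be made concrete with a short sketch. The array size, eccentricity, and the assumption that each rectangle shifts independently by +1 or -1 degree along its own spoke are illustrative choices, not parameters taken from the study's materials.

        import numpy as np

        def place_rectangles(n_items=8, eccentricity_deg=6.0):
            """Place n_items rectangle centres on imaginary spokes radiating from
            central fixation; positions are in degrees of visual angle."""
            spoke_angles = np.linspace(0, 2 * np.pi, n_items, endpoint=False)
            x = eccentricity_deg * np.cos(spoke_angles)
            y = eccentricity_deg * np.sin(spoke_angles)
            return np.column_stack([x, y]), spoke_angles

        def shift_along_spokes(positions, spoke_angles, shift_deg=1.0, rng=None):
            """Shift each rectangle +/- shift_deg radially along its own spoke,
            as in the Gestalt-strategy control condition."""
            rng = rng or np.random.default_rng()
            signs = rng.choice([-1.0, 1.0], size=len(spoke_angles))
            dx = signs * shift_deg * np.cos(spoke_angles)
            dy = signs * shift_deg * np.sin(spoke_angles)
            return positions + np.column_stack([dx, dy])

        # First presentation, then the radially jittered second presentation.
        pos1, angles = place_rectangles()
        pos2 = shift_along_spokes(pos1, angles, rng=np.random.default_rng(0))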

    Quality of experience in telemeetings and videoconferencing: a comprehensive survey

    Telemeetings such as audiovisual conferences or virtual meetings play an increasingly important role in our professional and private lives. For that reason, system developers and service providers will strive for an optimal experience for the user, while at the same time optimizing technical and financial resources. This leads to the discipline of Quality of Experience (QoE), an active field originating from the telecommunication and multimedia engineering domains, which strives to understand, measure, and design the quality of experience with multimedia technology. This paper provides the reader with an entry point to the large and still growing field of QoE of telemeetings, by taking a holistic perspective, considering both technical and non-technical aspects, and by focusing on current and near-future services. Addressing both researchers and practitioners, the paper first provides a comprehensive survey of factors and processes that contribute to the QoE of telemeetings, followed by an overview of relevant state-of-the-art methods for QoE assessment. To embed this knowledge into recent technology developments, the paper continues with an overview of current trends, focusing on the field of eXtended Reality (XR) applications for communication purposes. Given the complexity of telemeeting QoE and the current trends, new challenges for QoE assessment of telemeetings are identified. To overcome these challenges, the paper presents a novel Profile Template for characterizing telemeetings from the holistic perspective endorsed in this paper.
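
    As a rough illustration of what such a Profile Template might capture, the sketch below defines a small record type for characterizing a telemeeting scenario. The field names and values are assumptions chosen for the example; they are not the dimensions of the paper's actual template.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class TelemeetingProfile:
            """Illustrative telemeeting characterization; fields are assumptions,
            not the paper's actual Profile Template."""
            name: str
            participants: int                       # number of attendees
            modalities: List[str] = field(default_factory=lambda: ["audio", "video"])
            device: str = "desktop"                 # e.g. desktop, mobile, HMD
            network: str = "broadband"              # expected access network
            task_type: str = "conversation"         # e.g. conversation, presentation
            symmetric: bool = True                  # all parties both send and receive media

        meeting = TelemeetingProfile(name="weekly stand-up", participants=6)
        print(meeting)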

    Audio for Virtual, Augmented and Mixed Realities: Proceedings of ICSA 2019 ; 5th International Conference on Spatial Audio ; September 26th to 28th, 2019, Ilmenau, Germany

    ICSA 2019 brings together, across disciplines, developers, scientists, users, and content creators of and for spatial audio systems and services. A special focus is on audio for so-called virtual, augmented, and mixed realities. The fields of ICSA 2019 are: development and scientific investigation of technical systems and services for spatial audio recording, processing, and reproduction; creation of content for reproduction via spatial audio systems and services; use and application of spatial audio systems and content presentation services; and the media impact of content and spatial audio systems and services from the point of view of media science. ICSA 2019 is organized by the VDT and TU Ilmenau with the support of the Fraunhofer Institute for Digital Media Technology IDMT.

    Learning to see and hear in 3D: Virtual reality as a platform for multisensory perceptual learning

    Virtual reality (VR) is an emerging technology which allows for the presentation of immersive and realistic yet tightly controlled audiovisual scenes. In comparison to conventional displays, a VR system can include depth, 3D audio, and fully integrated eye, head, and hand tracking, all over a much larger field of view than a desktop monitor provides. These properties demonstrate great potential for use in vision science experiments, especially those that can benefit from more naturalistic stimuli, particularly in the case of visual rehabilitation. Prior work using conventional displays has demonstrated that visual loss due to stroke can be partially rehabilitated through laboratory-based tasks designed to promote long-lasting changes to visual sensitivity. In this work, I will explore how VR can provide a platform for new, more complex training paradigms that leverage multisensory stimuli. In this dissertation, I will (I) provide context to motivate the use of multisensory perceptual training in the context of visual rehabilitation, (II) demonstrate best practices for the appropriate use of VR in a controlled psychophysics setting, (III) describe a prototype integrated hardware system for improved eye tracking in VR, and (IV, V) discuss results from two audiovisual perceptual training studies, one using multisensory stimuli and the other cross-modal audiovisual stimuli. This dissertation provides the foundation for future work in rehabilitating visual deficits, both by improving the hardware and software systems used to present the training paradigm and by validating new techniques which use multisensory training not previously accessible with conventional desktop displays.
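
    As a loose illustration of the kind of audiovisual pairing such training paradigms rely on, the sketch below builds a list of trials in which an auditory cue is either spatially congruent or incongruent with a visual target. The azimuth set, congruency probability, and trial structure are assumptions for the example, not the dissertation's actual design.

        import random

        def make_audiovisual_trials(n_trials=100, p_congruent=0.5,
                                    azimuths=(-30, -15, 0, 15, 30)):
            """Pair a visual target azimuth with an auditory cue azimuth; the cue is
            either congruent (same azimuth) or drawn from a different location."""
            trials = []
            for _ in range(n_trials):
                visual_az = random.choice(azimuths)
                if random.random() < p_congruent:
                    audio_az = visual_az
                else:
                    audio_az = random.choice([a for a in azimuths if a != visual_az])
                trials.append({"visual_az_deg": visual_az,
                               "audio_az_deg": audio_az,
                               "congruent": audio_az == visual_az})
            return trials

        for trial in make_audiovisual_trials(n_trials=3):
            print(trial)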

    VR-based Soundscape Evaluation: Auralising the Sound from Audio Rendering, Reflection Modelling to Source Synthesis in the Acoustic Environment

    Soundscape has been growing as a research field associated with acoustics, urban planning, environmental psychology, and other disciplines since it was first introduced in the 1960s. To assess soundscapes, subjective validation is frequently integrated with soundscape reproduction. However, the existing soundscape standards do not give clear reproduction specifications for recreating a virtual sound environment. Selecting appropriate audio rendering methods, simulating sound propagation, and synthesising non-point sound sources remain major challenges for researchers. This thesis therefore attempts to give alternative or simplified strategies for reproducing a virtual sound environment, by suggesting binaural or monaural audio rendering, reflection modelling during sound propagation, and fewer synthesis points for non-point sources. To address these open issues, a systematic review of original studies first examines the ecological validity of immersive virtual reality in soundscape evaluation. Audio-visual stimuli of sound environments are then recorded and reproduced, and participants give their subjective responses to structured questionnaires. In this way, different audio rendering, reflection modelling, and source synthesis methods are validated by subjective evaluation. The results of this thesis reveal that a rational setup of VR techniques and evaluation methods provides a solid foundation for soundscape evaluation with reliable ecological validity. For soundscape audio rendering, binaural rendering still dominates soundscape evaluation compared with monaural rendering. For sound propagation under different reflection conditions, fewer reflection orders can be employed to assess different kinds of sounds in outdoor sound environments through VR experiences; a VR experience combining HMDs and Ambisonics significantly strengthens immersion even at low orders. For non-point source synthesis, especially line sources, once there are enough synthesis points for their angular spacing to fall below the minimum audible angle, human ears cannot distinguish the locations of the synthesised sound sources in the horizontal plane, which significantly increases immersion. These minimum specifications and simplifications refine the understanding of soundscape reproduction, and the findings will be beneficial for researchers and engineers in determining appropriate audio rendering, sound propagation modelling, and non-point source synthesis strategies.
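
    The minimum-audible-angle criterion for line-source synthesis can be made concrete with a short calculation: it estimates how many equally spaced point sources a straight line source needs so that, seen from the listener, no two adjacent points are further apart than a chosen angular threshold. The 1-degree threshold and the example geometry are illustrative assumptions, not values fixed by the thesis.

        import math

        def synthesis_points_for_line_source(length_m, distance_m, maa_deg=1.0):
            """Number of equally spaced point sources along a straight line source
            (listener at perpendicular distance distance_m from its centre) so that
            adjacent points stay within maa_deg of each other as seen by the listener.
            The widest angular gap occurs at the centre of the line, so the spacing
            is chosen from the geometry there."""
            maa_rad = math.radians(maa_deg)
            max_spacing = 2 * distance_m * math.tan(maa_rad / 2)  # spacing allowed at the centre
            n_segments = math.ceil(length_m / max_spacing)
            return n_segments + 1                                  # points = segments + 1

        # Example: a 20 m line source (e.g. a road segment) heard from 10 m away.
        print(synthesis_points_for_line_source(length_m=20, distance_m=10, maa_deg=1.0))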