3 research outputs found

    Mixed Realities: a live collaborative musical performance

    In the presented work, a live-rendered percussionist is transformed into a virtual game character and performs a piece alongside digital avatars created from recordings (audio and motion-capture) of other members of an ensemble, while audience members observe the collaborative performance through VR headsets. To create a cohesive and compelling result, the auditory expectations of the listeners need to be considered in terms of acoustic integrity between real and virtual sources and the spatial impression of each avatar performer. This paper presents an overview of the workflow and motivations behind this pilot experiment with a novel musical experience, laying the foundations for future subjective studies into collaborative music performances using virtual and augmented reality headsets. Particular focus is given to the technical challenges concerning the audio material, the perspectives of each participant role, and the qualitative impressions of musicians and audience.
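
    The spatial-impression requirement mentioned in the abstract can be made concrete with a toy rendering step. The Python sketch below applies inverse-distance gain and a constant-power pan to a mono avatar signal; it is only an illustrative stand-in, with assumed parameters (azimuth_deg, distance_m) chosen for the example, for the HRTF-based binaural rendering a VR headset pipeline would actually use, and none of it is taken from the paper itself.

        import numpy as np

        def spatialize(mono, azimuth_deg, distance_m, ref_dist=1.0):
            # Inverse-distance attenuation, clamped so sources closer than
            # the reference distance are not boosted.
            gain = ref_dist / max(distance_m, ref_dist)
            # Map azimuth in [-90, 90] degrees to a pan position in [0, 1].
            theta = np.radians(np.clip(azimuth_deg, -90.0, 90.0))
            pan = theta / np.pi + 0.5
            # Constant-power pan law keeps perceived loudness steady across the arc.
            left = mono * gain * np.cos(pan * np.pi / 2)
            right = mono * gain * np.sin(pan * np.pi / 2)
            return np.stack([left, right], axis=-1)

        # Example: place an avatar 30 degrees to the right, 2 m away.
        signal = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
        stereo = spatialize(signal, azimuth_deg=30.0, distance_m=2.0)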

    Incorporating co-presence in distributed virtual music environment

    In this paper, we present “PODIUM (POstech Distributed virtUal Music environment)”, a distributed virtual environment that allows users to participate in a shared space and play music with other participants in a collaborative manner. In addition to playing virtual instruments, users can communicate and interact in various ways to enhance the collaboration and, thus, the quality of the music played together. Musical messages are generated note by note through interaction with the keyboard, mouse, and other devices, and are transmitted over an IP-multicast network among participants. In addition to such note-level information, messages for visualization and interaction are supported. Real-world-based visualization was chosen over, for instance, abstract music-world-based visualization to promote “co-presence” (e.g., recognizing and interacting with other players), which is deemed important for collaborative music production. Beyond entertainment, we hope that DVME will find great use in casual practice sessions even for professional performers, orchestras, and bands. Since even a slight interruption in the flow of the music, or out-of-sync graphics and sound, would dramatically decrease the utility of the system, we employ various techniques to minimize network delay. An adapted server-client architecture and UDP are used to ensure fast packet delivery and reduce the data-bottleneck problem. Time-critical messages such as MIDI messages are multicast among clients, while less time-critical and infrequently updated messages are sent through the server. Predefined avatar animations are invoked by interpreting the musical messages. Using the latest graphics and sound processing hardware, and by maintaining a…
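
    As a rough illustration of the transport split described above (time-critical note messages multicast over UDP among clients, everything else via the server), the Python sketch below multicasts a three-byte note-on message while receivers join the same group. The group address, port, and message layout are assumptions made for the example, not details taken from the paper.

        import socket
        import struct

        MCAST_GRP = "239.1.2.3"   # hypothetical multicast group, not from the paper
        MCAST_PORT = 5004         # hypothetical port

        def make_sender():
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            # Keep packets on the local network segment.
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
            return sock

        def send_note_on(sock, channel, pitch, velocity):
            # Minimal MIDI-style note-on: status byte, pitch, velocity.
            msg = struct.pack("BBB", 0x90 | (channel & 0x0F), pitch & 0x7F, velocity & 0x7F)
            sock.sendto(msg, (MCAST_GRP, MCAST_PORT))

        def make_receiver():
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", MCAST_PORT))
            # Join the multicast group on all interfaces.
            mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            return sock

        # Example: one client plays middle C; every subscribed client hears it.
        send_note_on(make_sender(), channel=0, pitch=60, velocity=100)

    Sending notes peer-to-peer over multicast rather than routing them through the server is the design choice that keeps the latency-sensitive path short; the server remains responsible only for infrequent state updates.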

    Audio for Virtual, Augmented and Mixed Realities: Proceedings of ICSA 2019 ; 5th International Conference on Spatial Audio ; September 26th to 28th, 2019, Ilmenau, Germany

    ICSA 2019 focuses on a multidisciplinary gathering of developers, scientists, users, and content creators of and for spatial audio systems and services. A special focus is on audio for so-called virtual, augmented, and mixed realities. The fields of ICSA 2019 are:
    - Development and scientific investigation of technical systems and services for spatial audio recording, processing, and reproduction
    - Creation of content for reproduction via spatial audio systems and services
    - Use and application of spatial audio systems and content presentation services
    - Media impact of content and spatial audio systems and services from the point of view of media science
    ICSA 2019 is organized by the VDT and TU Ilmenau, with the support of the Fraunhofer Institute for Digital Media Technology IDMT.