
    An audio architecture integrating sound and live voice for virtual environments

    The purpose of this thesis was to design and implement an audio system architecture, in both hardware and software, for use in virtual environments. The design requirements were to provide the ability to add sounds, environmental effects such as reverberation and occlusion, and live streaming voice to any virtual environment employing this architecture. Several free or open-source sound APIs were evaluated, and DirectSound3D was selected as the core component of the audio architecture. Creative Labs' Environmental Audio Extensions (EAX) were integrated into the architecture to provide environmental effects such as reverberation, occlusion, obstruction, and exclusion. Voice over IP (VoIP) technology was evaluated to provide live streaming voice to any virtual environment, and DirectVoice was selected as the voice component of the architecture because of its integration with DirectSound3D. However, the extremely high latency of DirectVoice, and of VoIP software in general, required further research into alternative live-voice architectures for virtual environments. Ausim3D's GoldServe Audio Localizing Audio Server System was evaluated and integrated into the hardware component of the audio architecture to provide an extremely low-latency, live, streaming voice capability.
    http://archive.org/details/anudiorchitectur109454977
    Commander, United States Naval Reserve
    Approved for public release; distribution is unlimited.
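    As a rough illustration of the parameter layer such an architecture exposes, the sketch below (Python, with hypothetical helper names; not code from the thesis) computes the inverse-distance attenuation that DirectSound3D-style APIs apply to positioned sources, plus a flat occlusion loss of the kind EAX's occlusion property models.

```python
import math

def distance_attenuation_db(source_pos, listener_pos, rolloff=1.0, ref_dist=1.0):
    """Inverse-distance rolloff, the attenuation model DirectSound3D-style
    APIs apply to a positioned source (rolloff scales how fast gain drops)."""
    d = math.dist(source_pos, listener_pos)
    d = max(d, ref_dist)  # no boost inside the reference distance
    return -20.0 * rolloff * math.log10(d / ref_dist)

def occlusion_db(blocked, material_loss_db=-18.0):
    """Flat loss applied when geometry blocks the line of sight, approximating
    what an EAX-style occlusion property would be set to. The -18 dB default
    is an illustrative value, not taken from the thesis."""
    return material_loss_db if blocked else 0.0

# Example: a source 8 m away, behind a wall.
gain_db = distance_attenuation_db((8.0, 0.0, 0.0), (0.0, 0.0, 0.0)) + occlusion_db(True)
print(f"net source gain: {gain_db:.1f} dB")
```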

    Distributed Networks of Listening and Sounding: 20 Years of Telematic Musicking

    This paper traces a twenty-year arc of my performance and compositional practice in the medium of telematic music, focusing on a distinct approach to fostering interdependence and emergence through the integration of listening strategies, electroacoustic improvisation, pre-composed structures, blended real/virtual acoustics, networked mutual influence, shared signal transformations, gesture-concepts, and machine agencies. Communities of collaboration and exchange over this period are discussed, spanning both pre- and post-pandemic approaches to the medium that range from metaphors of immersion and dispersion to diffraction.

    Acting rehearsal in collaborative multimodal mixed reality environments

    This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring the successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference presented by the system to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication, meaning that moments relying on performing and reacting to consequential facial expression and subtle gesture were less successful.

    Binaural Spatialization for 3D immersive audio communication in a virtual world

    Realistic 3D audio can greatly enhance the sense of presence in a virtual environment. We introduce a framework for capturing, transmitting, and rendering 3D audio in the presence of other bandwidth-intensive streams in a 3D tele-immersion-based virtual environment. The framework provides an efficient implementation of 3D binaural spatialization based on the positions of the objects currently in the scene, including animated avatars and on-the-fly reconstructed humans. We present a general overview of the framework, how audio is integrated into the system, and how it can exploit object positions and room geometry to render realistic reverberation using head-related transfer functions. The network streaming modules used to achieve lip synchronization, high-quality audio frame reception, and accurate localization for binaural rendering are also presented, and we highlight how large computational and networking challenges can be addressed efficiently. This represents a first step toward adequate networking support for binaural 3D audio for telepresence. The subsystem is successfully integrated into a larger 3D immersive system with state-of-the-art capture and rendering modules for visual data.
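    The paper's rendering code is not reproduced here, but the core of position-based binaural spatialization can be sketched as follows: derive the source's azimuth relative to the listener, pick the nearest head-related impulse response (HRIR) pair, and convolve the mono signal with it. The HRIR "database" below is random noise purely as a stand-in; a real renderer would use measured HRTFs, interpolate between them, and add the room reverberation the abstract describes.

```python
import numpy as np

def azimuth_deg(listener_pos, listener_fwd, source_pos):
    """Signed horizontal angle (x/z plane, y-up) from the listener's forward
    axis to the source; positive means the source is to the listener's right."""
    dx, dz = source_pos[0] - listener_pos[0], source_pos[2] - listener_pos[2]
    fx, fz = listener_fwd[0], listener_fwd[2]
    dot = fx * dx + fz * dz
    cross = fz * dx - fx * dz
    return float(np.degrees(np.arctan2(cross, dot)))

def render_binaural(mono_frame, azimuth, hrir_db):
    """Convolve one mono frame with the nearest-neighbour HRIR pair."""
    nearest = min(hrir_db, key=lambda a: abs(a - azimuth))
    h_left, h_right = hrir_db[nearest]
    left = np.convolve(mono_frame, h_left)
    right = np.convolve(mono_frame, h_right)
    return np.stack([left, right], axis=-1)  # (samples, 2) stereo frame

# Toy stand-in database: one random 64-tap HRIR pair per 30 degrees of azimuth.
rng = np.random.default_rng(0)
hrirs = {az: (rng.standard_normal(64) * 0.1, rng.standard_normal(64) * 0.1)
         for az in range(-180, 180, 30)}
frame = rng.standard_normal(1024)
stereo = render_binaural(frame, azimuth_deg((0, 0, 0), (0, 0, 1), (1, 0, 2)), hrirs)
```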

    Peer-to-peer interactive 3D media dissemination in networked virtual environments

    Ph.D. thesis (Doctor of Philosophy).

    Full Body Acting Rehearsal in a Networked Virtual Environment-A Case Study

    In order to rehearse a play or a scene from a movie, the actors are generally required to be physically present at the same time and in the same place. In this paper we present an example and experience of a full-body-motion shared virtual environment (SVE) for rehearsal. The system allows actors and directors to meet in an SVE to rehearse scenes for a play or a movie, that is, to carry out dialogue and blocking (positions, movements, and displacements of actors in the scene) rehearsal through a full-body interactive virtual reality (VR) system. The system combines immersive VR rendering techniques and network capabilities with full-body tracking. Two actors and a director rehearsed from separate locations: one actor and the director were in London (in separate rooms) while the second actor was in Barcelona. The Barcelona actor used a wide-field-of-view, head-tracked, head-mounted display and wore a body suit for real-time motion capture and display. The London actor was in a Cave system with head and partial body tracking. Each actor was presented to the other as an avatar in the shared virtual environment, and the director could see the whole scenario on a desktop display and intervene by voice commands; the director was also represented by a video stream displayed in a window in the virtual environment. The London participant was a professional actor, who afterwards commented on the utility of the system for acting rehearsal. It was concluded that full-body tracking and corresponding real-time display of all the actors' movements would be a critical requirement, and that blocking was possible down to the level of detail of gestures. Details of the implementation and of the actors' and director's experiences are provided.
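    The paper does not publish its networking protocol. As a minimal sketch of the general idea it describes (streaming tracked body data between sites so that each actor can be rendered as an avatar at the remote end), the following sends a single hypothetical pose frame over UDP; every name and field here is illustrative, and the actual system's transport and format are not specified in the abstract.

```python
import json
import socket

# One tracked pose sample: joint name -> (x, y, z) position in metres.
# A real system would also carry joint rotations and timestamps.
pose = {"head": (0.0, 1.7, 0.0), "l_hand": (-0.4, 1.2, 0.3), "r_hand": (0.4, 1.2, 0.3)}

def send_pose(sock, pose, addr):
    """Serialise one motion-capture frame and send it to a remote site."""
    sock.sendto(json.dumps(pose).encode("utf-8"), addr)

def receive_pose(sock):
    """Block for the next frame and decode it for local avatar display."""
    data, _ = sock.recvfrom(65536)
    return json.loads(data.decode("utf-8"))

if __name__ == "__main__":
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 9000))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_pose(tx, pose, ("127.0.0.1", 9000))
    print(receive_pose(rx))
```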

    Contemporary Urban Media Art – Images of Urgency: A Curatorial Inquiry


    Wearable performance

    Wearable computing devices worn on the body provide the potential for digital interaction in the world. A new stage of computing technology at the beginning of the 21st century links the personal and the pervasive through mobile wearables. The convergence between the miniaturisation of microchips (nanotechnology), intelligent textile or interfacial materials production, advances in biotechnology, and the growth of wireless, ubiquitous computing emphasises not only mobility but integration into clothing or the human body. In artistic contexts, one expects such integrated wearable devices to have the two-way function of interface instruments (e.g. sensor data acquisition and exchange) worn for particular purposes, either for communication with the environment or for various aesthetic and compositional expressions. 'Wearable performance' briefly surveys the context for wearables in the performance arts and distinguishes display garments from performative/interfacial garments. It then focuses on the authors' experiments with 'design in motion' and digital performance, examining prototyping at the DAP-Lab, which involves transdisciplinary convergences between fashion and dance, interactive system architecture, electronic textiles, wearable technologies, and digital animation. The concept of an 'evolving' garment design that is materialised (mobilised) in live performance between partners originates from DAP-Lab's work with telepresence and distributed media, addressing the 'connective tissues' and 'wearabilities' of projected bodies through a study of shared embodiment and perception/proprioception in the wearer (tactile sensory processing). Such notions of wearability are applied both to the immediate sensory processing on the performer's body and to the processing of the responsive, animate environment.
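    As a toy illustration of the 'interface instrument' idea (not drawn from the article, which does not specify its sensor pipeline), the sketch below maps an assumed accelerometer stream from a garment-mounted sensor to a normalised control parameter that could drive a visual or sonic element of the performance.

```python
import math

def motion_intensity(accel_xyz, gravity=9.81):
    """Magnitude of acceleration with gravity removed: a rough measure of
    how vigorously the wearer is moving."""
    magnitude = math.sqrt(sum(a * a for a in accel_xyz))
    return abs(magnitude - gravity)

def map_to_parameter(intensity, max_intensity=20.0):
    """Normalise intensity into [0, 1] for driving an animation or sound
    parameter; the scaling ceiling is an arbitrary illustrative choice."""
    return min(intensity / max_intensity, 1.0)

# Simulated accelerometer samples (m/s^2) from a sleeve-mounted sensor.
for sample in [(0.1, 9.8, 0.2), (4.0, 12.0, 3.0), (9.0, 15.0, 8.0)]:
    print(map_to_parameter(motion_intensity(sample)))
```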

    Narrowcasting operations for mobile phone CVE chatspace avatars

    Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003.
    We have developed an interface for narrowcasting (selection) functions for a networked mobile device deployed in a collaborative virtual environment (CVE). Featuring a variable number of icons in a '2.5D' application, the interface can be used to control the motion, sensitivity, and audibility of avatars in a teleconference or chatspace. The interface is integrated with other CVE clients through a 'servent' (server/client hybrid) HTTP TCP/IP gateway, and interoperates with a heterogeneous groupware suite to interact with other clients, including stereographic panoramic browsers, spatial audio backends, and speaker arrays. Novel features include mnemonic keypad operations for conferencing selection functions, multiply encoded graphical display of such non-mutually-exclusive attributes, and explicit multipresence features.
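    Narrowcasting applies mute/select operations to sources (and, symmetrically, deafen/attend operations to listening sinks). A minimal sketch of the source-side audibility predicate, assuming the usual solo semantics for 'select' (the paper's full model also covers sinks and multipresence):

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    muted: bool = False     # explicitly silenced
    selected: bool = False  # soloed: implicitly mutes all non-selected sources

def audible(source, sources):
    """A source is heard unless it is muted, or some source is selected
    ('soloed') and this one is not."""
    if source.muted:
        return False
    if any(s.selected for s in sources) and not source.selected:
        return False
    return True

avatars = [Source("alice", selected=True), Source("bob"), Source("carol", muted=True)]
for a in avatars:
    print(a.name, audible(a, avatars))  # alice True, bob False, carol False
```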