
    The blue-c Distributed Scene Graph

    In this paper we present a distributed scene graph architecture for use in the blue-c, a novel collaborative immersive virtual environment. We extend the widely used OpenGL Performer toolkit to provide a distributed scene graph maintaining full synchronization down to the vertex and texel level. We propose a synchronization scheme including customizable, relaxed locking mechanisms. We demonstrate the functionality of our toolkit with two prototype applications in our high-performance virtual reality and visual simulation environment.
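    The relaxed locking idea can be illustrated with a small sketch. The C++ fragment below is a minimal, assumed illustration rather than the blue-c/Performer API: the names SceneNode, LockPolicy, and the version counter are hypothetical. The point it demonstrates is that, under a relaxed policy, stale remote updates are simply dropped instead of blocking, trading strict consistency for lower latency.

```cpp
// Minimal sketch of a relaxed, customizable locking scheme for
// distributed scene-graph nodes. SceneNode, LockPolicy and
// applyRemoteUpdate are hypothetical names, not the blue-c toolkit API.
#include <cstdint>
#include <functional>
#include <iostream>
#include <mutex>

// Strict: apply every remote update (assumed delivered in order).
// Relaxed: drop updates older than the node's current version.
enum class LockPolicy { Strict, Relaxed };

struct Update { uint64_t version; std::function<void()> apply; };

class SceneNode {
public:
    explicit SceneNode(LockPolicy p) : policy_(p) {}

    // Local edit: bump the version and apply immediately.
    void editLocal(const std::function<void()>& apply) {
        std::lock_guard<std::mutex> g(m_);
        ++version_;
        apply();
    }

    // Remote update: under the Relaxed policy, stale updates are
    // discarded, accepting momentary inconsistency for lower latency.
    void applyRemoteUpdate(const Update& u) {
        std::lock_guard<std::mutex> g(m_);
        if (policy_ == LockPolicy::Relaxed && u.version <= version_) return;
        version_ = u.version;
        u.apply();
    }

    uint64_t version() const { return version_; }

private:
    LockPolicy policy_;
    uint64_t version_ = 0;
    mutable std::mutex m_;
};

int main() {
    SceneNode node(LockPolicy::Relaxed);
    node.editLocal([] { std::cout << "local vertex edit\n"; });
    node.applyRemoteUpdate({0, [] { std::cout << "stale remote edit\n"; }});  // dropped
    node.applyRemoteUpdate({5, [] { std::cout << "newer remote edit\n"; }});  // applied
    std::cout << "node version: " << node.version() << "\n";
}
```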

    blue-c: Spatially Immersive Projection and 3D Video Portals for Telepresence. Part 1: Concept, Hardware and Video Processing

    Abstract: This article describes the design and architecture of blue-c, a poly-project of ETH Zürich in which up to 20 scientific staff members were involved over a period of four years and from which a total of nine doctoral theses emerged.

    3D video fragments: dynamic point samples for real-time free-viewpoint video

    We present 3D video fragments, a dynamic point sample framework for real-time free-viewpoint video. By generalizing 2D video pixels towards 3D irregular point samples, we combine the simplicity of conventional 2D video processing with the power of more complex polygonal representations for free-viewpoint video. We propose a differential update scheme exploiting the spatio-temporal coherence of the video streams of multiple cameras. Updates are issued by operators such as inserts and deletes accounting for changes in the input video images. The operators from multiple cameras are processed, merged into a 3D video stream, and transmitted to a remote site. We also introduce a novel concept for camera control which dynamically selects the set of relevant cameras for reconstruction. Moreover, it adapts to the processing load and rendering platform. Our framework is generic in the sense that it works with any real-time 3D reconstruction method which extracts depth from images. The video renderer displays free-viewpoint videos using an efficient point-based splatting scheme and makes use of state-of-the-art vertex and pixel processing hardware for real-time visual processing.
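    A sketch of the differential update scheme may help make it concrete. The structures below (PointKey, Op, PointCloud) are assumed names for illustration, not the paper's actual interfaces; they show how point samples keyed by their source camera pixel can be maintained through insert and delete operators merged from multiple cameras.

```cpp
// Hedged sketch of differential updates for 3D video fragments:
// each 3D point sample is keyed by the camera pixel it came from,
// and changes in the input video issue insert/delete operators.
#include <array>
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

struct PointKey {
    uint16_t camera;  // which input camera produced the sample
    uint32_t pixel;   // linear pixel index in that camera's image
    bool operator==(const PointKey& o) const {
        return camera == o.camera && pixel == o.pixel;
    }
};
struct KeyHash {
    size_t operator()(const PointKey& k) const {
        return size_t((uint64_t(k.camera) << 32) ^ k.pixel);
    }
};

struct PointSample { std::array<float, 3> pos; std::array<uint8_t, 3> rgb; };

enum class OpKind { Insert, Delete };
struct Op { OpKind kind; PointKey key; PointSample sample; };

// Receiver-side point cloud: applies merged operator streams from
// all cameras to keep the 3D representation current.
class PointCloud {
public:
    void apply(const std::vector<Op>& stream) {
        for (const Op& op : stream) {
            if (op.kind == OpKind::Insert) points_[op.key] = op.sample;
            else points_.erase(op.key);
        }
    }
    size_t size() const { return points_.size(); }
private:
    std::unordered_map<PointKey, PointSample, KeyHash> points_;
};

int main() {
    PointCloud cloud;
    cloud.apply({{OpKind::Insert, {0, 42}, {{0.1f, 0.2f, 1.5f}, {200, 180, 160}}},
                 {OpKind::Insert, {1, 42}, {{0.1f, 0.2f, 1.5f}, {198, 181, 159}}}});
    cloud.apply({{OpKind::Delete, {0, 42}, {}}});  // input pixel changed: retire sample
    std::cout << "active point samples: " << cloud.size() << "\n";
}
```

    Keying samples by (camera, pixel) is what lets the stream stay differential: only pixels that actually changed in some camera generate operators, so static parts of the scene cost no bandwidth.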

    Real-time streaming of point-based 3D video

    Free-viewpoint video is a promising technology for next-generation virtual and augmented reality applications. Our goal is to enhance collaborative VR applications with 3D video-conferencing features. In this paper, we propose a 3D video streaming technique which can be deployed in telepresence environments. The streaming characteristics of real-time 3D video sequences are investigated under various system and networking conditions. We introduce several encoding techniques and analyze their behavior with respect to resolution, bandwidth and inter-frame jitter. Our 3D video pipeline uses point samples as basic primitives and is fully integrated with a communication framework handling acknowledgment information for reliable network transmissions and application control data. The 3D video reconstruction process dynamically adapts to processing and networking bottlenecks. Our results show that a reliable transmission of our pixel-based differential prediction encoding leads to the best performance in terms of bandwidth, but is also quite sensitive to packet losses. A redundantly encoded stream achieves better results in the presence of burst losses and seamlessly adapts to varying network throughput.
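    The bandwidth/loss-sensitivity trade-off of differential prediction can be shown in a few lines. The sketch below is an assumed illustration, not the paper's codec: per frame, only samples whose depth or color changed beyond a threshold are re-encoded, so bandwidth is minimal, but a lost packet leaves stale samples until they change again.

```cpp
// Illustrative sketch of pixel-based differential prediction for
// point-based 3D video streaming (assumed names, not the paper's codec).
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

struct Sample { float depth; uint8_t r, g, b; };

// Returns the indices of samples that must be retransmitted for the
// current frame; all other samples are predicted from the previous
// frame. Assumes prev and cur cover the same pixel grid.
std::vector<uint32_t> diffEncode(const std::vector<Sample>& prev,
                                 const std::vector<Sample>& cur,
                                 float depthEps) {
    std::vector<uint32_t> changed;
    for (uint32_t i = 0; i < cur.size(); ++i) {
        const Sample& p = prev[i];
        const Sample& c = cur[i];
        if (std::fabs(p.depth - c.depth) > depthEps ||
            p.r != c.r || p.g != c.g || p.b != c.b)
            changed.push_back(i);
    }
    return changed;
}

int main() {
    std::vector<Sample> prev(640 * 480, {1.0f, 128, 128, 128});
    std::vector<Sample> cur = prev;
    cur[100].depth = 1.2f;         // object moved at one pixel
    cur[200] = {1.0f, 255, 0, 0};  // color changed at another
    auto changed = diffEncode(prev, cur, 0.01f);
    // Only the changed samples enter the packet, not the whole frame.
    std::cout << changed.size() << " of " << cur.size()
              << " samples retransmitted\n";
}
```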

    Dynamic Point Samples for Free-Viewpoint Video

    [Figure 1: Examples of dynamic point samples for free-viewpoint video. a) Real-time free-viewpoint video in the blue-c; b) 3D video recorder with object-space compression; c) 3D video recorder with image-space free-viewpoint video. These results were recorded in various acquisition prototype setups.]
    Free-viewpoint video (FVV) uses multiple video streams to re-render a time-varying scene from arbitrary viewpoints. FVV enables free navigation with respect to time and space in streams of visual data and allows for virtual replays and freeze-and-rotate effects, for instance. Moreover, FVV technology can improve communication between remote participants in high-end telepresence applications. In combination with spatially immersive projection environments, 3D video conferencing allows for life-size, three-dimensional representations of the users instead of small, flat video images. In this paper we propose the application of dynamic point samples as primitives for FVV by generalizing 2D video pixels towards 3D irregular point samples. The different attributes of the point samples define the appearance and geometry of surfaces in a unified manner. Furthermore, by storing the reference to a pixel in an input video camera, efficient coding and compression schemes can be employed for FVV. We show two different systems for free-viewpoint video using this primitive, namely a real-time FVV system employed in a high-end telecollaboration system and a FVV recording system using two different representations and coding schemes. We evaluate the performance and quality of the presented systems and algorithms using the blue-c system with its portals. Both presented 3D video systems are integrated into the telepresence software system of the blue-c, ready to use and demonstrable.
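    As a rough sketch of the unified primitive, the struct below combines geometry and appearance attributes with a back-reference to the source camera pixel, and shows the standard pinhole estimate a point-based renderer could use for screen-space splat size. All field and function names are assumptions for illustration, not the blue-c API.

```cpp
// Minimal sketch of a unified dynamic point-sample primitive and a
// splat-size estimate for point-based rendering (assumed names).
#include <iostream>

struct DynamicPointSample {
    // Geometry and appearance in one primitive.
    float x, y, z;         // position in world space
    float nx, ny, nz;      // surface normal (splat orientation/shading)
    float radius;          // world-space splat radius
    unsigned char r, g, b; // color
    // Back-reference to the source video pixel, enabling the
    // image-space coding schemes mentioned in the abstract.
    unsigned short camera;
    unsigned int pixel;
};

// Under a pinhole projection, a splat of world radius s at depth z
// covers roughly s * f / z pixels, where f is the focal length in pixels.
float projectedSplatRadius(const DynamicPointSample& p, float focalPx) {
    return p.radius * focalPx / p.z;
}

int main() {
    DynamicPointSample p{0.f, 0.f, 2.f, 0.f, 0.f, 1.f, 0.01f,
                         210, 190, 170, 0, 12345};
    std::cout << "splat covers ~" << projectedSplatRadius(p, 800.f) << " px\n";
}
```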