The blue-c Distributed Scene Graph
In this paper we present a distributed scene graph architecture for use in the blue-c, a novel collaborative immersive virtual environment. We extend the widely used OpenGL Performer toolkit to provide a distributed scene graph maintaining full synchronization down to the vertex and texel level. We propose a synchronization scheme including customizable, relaxed locking mechanisms. We demonstrate the functionality of our toolkit with two prototype applications in our high-performance virtual reality and visual simulation environment.
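A relaxed locking mechanism of the kind the abstract mentions could, for instance, replace blocking locks with optimistic version checks on each scene-graph node. The sketch below is purely illustrative (the class and method names are invented, not the blue-c API):

```python
import threading

class RelaxedNodeLock:
    """Hypothetical sketch of relaxed locking on a shared scene-graph node.

    Rather than blocking remote writers, each node carries a version number;
    a write is accepted only if it was based on the latest version, otherwise
    the writer re-reads and retries (optimistic concurrency)."""

    def __init__(self):
        self._mutex = threading.Lock()   # protects only the brief check-and-set
        self._version = 0
        self._value = None

    def read(self):
        with self._mutex:
            return self._version, self._value

    def try_write(self, based_on_version, new_value):
        with self._mutex:
            if based_on_version != self._version:
                return False             # stale update: caller must retry
            self._value = new_value
            self._version += 1
            return True
```

A writer that loses a race simply re-reads the node and reapplies its change, which avoids holding locks across the network.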
blue-c: Spatially Immersive Projection and 3D Video Portals for Telepresence. Part 1: Concept, Hardware, and Video Processing
Abstract: The following article describes the design and architecture of blue-c, a poly-project of ETH Zürich in which up to 20 research assistants participated over a period of four years and which produced a total of nine doctoral theses.
Unconstrained Free-Viewpoint Video Coding
In this paper, we present a coding framework addressing image-space compression for free-viewpoint video. Our framework is based on time-varying 3D point samples which represent real-world objects. The 3D point samples are obtained after a geometrical reconstruction from multiple pre-recorded video sequences and thus allow for arbitrary viewpoints during playback. The encoding of the data is performed as an off-line process and is not time-critical. The decoding, however, must support real-time rendering of the dynamic 3D data. We introduce a compression framework which encodes multiple point attributes, such as depth and color, into progressive streams. The reference data structure is aligned on the original camera input images and thus enables easy view-dependent decoding. A novel differential coding approach permits random access in constant time throughout the entire data set and thus enables arbitrary viewpoint trajectories in both time and space.
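The constant-time random-access property can be illustrated with a simplified differential scheme: if every frame is encoded as a difference to one fixed reference rather than to its predecessor, any frame decodes in one step with no chain of dependencies. This sketch is an assumption-laden simplification (flat per-pixel lists standing in for depth/color images), not the paper's actual codec:

```python
def encode_differential(frames, reference):
    """Encode each frame as a per-pixel difference to a fixed reference frame.

    Because every frame refers to the same reference (not its predecessor),
    decoding any frame t is O(1) in the stream length -- a simplified take on
    the random-access property described above."""
    return [[f - r for f, r in zip(frame, reference)] for frame in frames]

def decode_frame(diffs, reference, t):
    """Reconstruct frame t directly: no predecessor frames are needed."""
    return [d + r for d, r in zip(diffs[t], reference)]
```

In a real codec the reference would be refreshed periodically so that the differences stay small, but the access pattern stays the same.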
3D video fragments: dynamic point samples for real-time free-viewpoint video
We present 3D video fragments, a dynamic point sample framework for real-time free-viewpoint video. By generalizing 2D video pixels towards 3D irregular point samples we combine the simplicity of conventional 2D video processing with the power of more complex polygonal representations for free-viewpoint video. We propose a differential update scheme exploiting the spatio-temporal coherence of the video streams of multiple cameras. Updates are issued by operators such as inserts and deletes accounting for changes in the input video images. The operators from multiple cameras are processed, merged into a 3D video stream and transmitted to a remote site. We also introduce a novel concept for camera control which dynamically selects the set of relevant cameras for reconstruction. Moreover, it adapts to the processing load and rendering platform. Our framework is generic in the sense that it works with any real-time 3D reconstruction method which extracts depth from images. The video renderer displays free-viewpoint videos using an efficient point-based splatting scheme and makes use of state-of-the-art vertex and pixel processing hardware for real-time visual processing.
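The insert/delete operator scheme can be sketched as a per-camera diff between consecutive pixel-to-point maps: only pixels whose point-sample attributes changed generate operators, which is how temporal coherence keeps the stream small. The data layout below is invented for illustration (pixel coordinates as keys, opaque attribute values):

```python
def diff_operators(prev, curr):
    """Compare the previous and current pixel-to-point maps of one camera
    and emit INSERT/DELETE operators only for pixels that changed."""
    ops = []
    for px in prev.keys() - curr.keys():
        ops.append(("DELETE", px))                 # point disappeared
    for px, attr in curr.items():
        if prev.get(px) != attr:
            ops.append(("INSERT", px, attr))       # new or updated point
    return ops

def apply_operators(state, ops):
    """Replay an operator stream at the remote site to update its copy."""
    for op in ops:
        if op[0] == "DELETE":
            state.pop(op[1], None)
        else:
            state[op[1]] = op[2]
    return state
```

Replaying the operator stream at the receiver reproduces the sender's current map, so only changes ever cross the network.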
Real-time streaming of point-based 3D video
Free-viewpoint video is a promising technology for next-generation virtual and augmented reality applications. Our goal is to enhance collaborative VR applications with 3D video-conferencing features. In this paper, we propose a 3D video streaming technique which can be deployed in telepresence environments. The streaming characteristics of real-time 3D video sequences are investigated under various system and networking conditions. We introduce several encoding techniques and analyze their behavior with respect to resolution, bandwidth and inter-frame jitter. Our 3D video pipeline uses point samples as basic primitives and is fully integrated with a communication framework handling acknowledgment information for reliable network transmissions and application control data. The 3D video reconstruction process dynamically adapts to processing and networking bottlenecks. Our results show that a reliable transmission of our pixel-based differential prediction encoding leads to the best performance in terms of bandwidth, but is also quite sensitive to packet losses. A redundantly encoded stream achieves better results in the presence of burst losses and seamlessly adapts to varying network throughput.
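The trade-off the results describe — differential prediction for minimal bandwidth versus redundant encoding for loss resilience — suggests a simple adaptation policy. The function below is a hypothetical illustration of such a policy; the thresholds and mode names are invented, not taken from the paper:

```python
def choose_encoding(loss_rate, burst_losses, bandwidth_headroom):
    """Illustrative policy for the trade-off reported above: differential
    prediction minimizes bandwidth but is sensitive to packet loss, while
    a redundant stream tolerates burst losses at a higher bit rate.

    loss_rate: observed packet-loss fraction (0.0-1.0)
    burst_losses: whether losses arrive in bursts
    bandwidth_headroom: spare capacity as a fraction of current usage
    """
    if loss_rate < 0.01 and not burst_losses:
        return "differential"                # lowest bandwidth, loss-sensitive
    if bandwidth_headroom > 0.5:
        return "redundant"                   # robust against burst losses
    return "differential-with-retransmit"    # rely on reliable transport
```

A real pipeline would feed this from receiver feedback and re-evaluate it per frame group.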
Interactive Multimedia Streams in Distributed Applications
Distributed multimedia applications typically handle two different types of communication: request/reply interaction for control information as well as real-time streaming data. The CORBA Audio/Video Streaming Service provides a promising framework for the efficient development of such applications. In this paper, we discuss the CORBA-based design and implementation with respect to different configurations. We especially investigate delays, i.e., the latencies that occur between issuing a CORBA request and receiving the first video frame corresponding to the new mode. Our analysis confirms that the interactive delay can be reasonably bounded for UDP and RTP. Since we do not take into account any media-specific compression issues, our results help to make essential design decisions while developing interactive multimedia applications in general, involving, e.g., distributed synthetic image data, or augmented and virtual reality.
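The delay metric studied here — the time from issuing a control request until the first frame of the new mode arrives — can be captured with a simple wall-clock measurement around the two steps. The callables below are assumed placeholders, not the CORBA A/V Streaming Service API:

```python
import time

def interactive_delay(send_request, receive_first_frame):
    """Measure the interactive delay: the time between issuing a control
    request and receiving the first video frame in the new mode.

    Both arguments are caller-supplied callables (hypothetical stand-ins
    for the request/reply call and the blocking frame receive)."""
    t0 = time.monotonic()          # monotonic clock: immune to wall-clock jumps
    send_request()                 # e.g. a CORBA request switching the mode
    receive_first_frame()          # blocks until the first new-mode frame
    return time.monotonic() - t0
```

Repeating the measurement over many mode switches and reporting the distribution (rather than a single sample) is what allows a bound like the one the paper states for UDP and RTP.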
Dynamic Point Samples for Free-Viewpoint Video
Figure 1: Examples of dynamic point samples for free-viewpoint video: a) real-time free-viewpoint video in the blue-c; b) 3D video recorder with object-space compression; c) 3D video recorder with image-space free-viewpoint video. These results were recorded in various acquisition prototype setups.
Free-viewpoint video (FVV) uses multiple video streams to re-render a time-varying scene from arbitrary viewpoints. FVV enables free navigation with respect to time and space in streams of visual data and allows for virtual replays and for freeze-and-rotate effects, for instance. Moreover, FVV technology can improve communication between remote participants in high-end telepresence applications. In combination with spatially immersive projection environments, 3D video conferencing allows for life-size, three-dimensional representations of the users instead of small, flat video images. In this paper we propose the application of dynamic point samples as primitives for FVV by generalizing 2D video pixels towards 3D irregular point samples. The different attributes of the point samples define appearance and geometry of surfaces in a unified manner. Furthermore, by storing the reference to a pixel in an input video camera, efficient coding and compression schemes can be employed for FVV. We show two different systems for free-viewpoint video using this primitive, namely a real-time FVV system employed in a high-end telecollaboration system and a FVV recording system using two different representations and coding schemes. We evaluate performance and quality of the presented systems and algorithms using the blue-c system with its portals. Both presented 3D video systems are integrated into the telepresence software system of the blue-c, ready to use and demonstrable.
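The idea of a primitive that unifies appearance and geometry, and keeps a back-reference to its source camera pixel for image-aligned coding, can be sketched as a small record type. The field names and the simplified reprojection below are invented for illustration, not the actual blue-c data structures:

```python
from dataclasses import dataclass

@dataclass
class PointSample:
    """Illustrative layout of a dynamic point sample: geometry and appearance
    in one primitive, plus a reference back to the source camera pixel, which
    is what enables the image-aligned coding schemes described above."""
    camera_id: int       # which input camera produced this sample
    pixel: tuple         # (u, v) coordinates in that camera's image
    depth: float         # geometry: distance along the camera ray
    color: tuple         # appearance: (r, g, b)
    normal: tuple        # surface normal, used for point splatting

def reproject(sample, cam_origin, ray_dir):
    """3D position of a sample: depth along its camera ray (simplified
    pinhole model with a precomputed, normalized ray direction)."""
    ox, oy, oz = cam_origin
    dx, dy, dz = ray_dir
    d = sample.depth
    return (ox + d * dx, oy + d * dy, oz + d * dz)
```

Because geometry is stored as depth relative to a known camera, only a scalar per pixel needs to be coded, while the full 3D position is recovered at render time.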