Layered depth images
In this paper we present a set of efficient image-based rendering methods capable of rendering multiple frames per second on a PC. The first method warps Sprites with Depth, representing smooth surfaces without the gaps found in other techniques. A second method for more general scenes performs warping from an intermediate representation called a Layered Depth Image (LDI). An LDI is a view of the scene from a single input camera view, but with multiple pixels along each line of sight. The size of the representation grows only linearly with the observed depth complexity in the scene. Moreover, because the LDI data are represented in a single image coordinate system, McMillan's warp ordering algorithm can be successfully adapted. As a result, pixels are drawn in the output image in back-to-front order. No z-buffer is required, so alpha-compositing can be done efficiently without depth sorting. This makes splatting an efficient solution to the resampling problem.
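The core of the abstract above is that an LDI stores several depth samples per pixel and, once those samples are visited back to front, transparency can be resolved with the "over" operator and no z-buffer. A minimal sketch of that idea, assuming premultiplied-alpha samples and that a per-pixel sort stands in for McMillan's warp-ordering traversal:

```python
from dataclasses import dataclass, field

@dataclass
class DepthSample:
    depth: float
    rgba: tuple  # (r, g, b, a), premultiplied alpha assumed

@dataclass
class LDIPixel:
    # Multiple samples along one line of sight -- the defining LDI property.
    samples: list = field(default_factory=list)

def composite_back_to_front(pixel):
    """Composite a pixel's samples in back-to-front order with the 'over'
    operator. In a real LDI renderer the warp ordering already yields this
    order; the explicit sort here is just for a self-contained example."""
    out = (0.0, 0.0, 0.0, 0.0)
    for s in sorted(pixel.samples, key=lambda s: -s.depth):  # farthest first
        r, g, b, a = s.rgba
        # Each new sample lies in front of everything accumulated so far.
        out = (r + out[0] * (1 - a),
               g + out[1] * (1 - a),
               b + out[2] * (1 - a),
               a + out[3] * (1 - a))
    return out
```

Because order is known, each sample is blended exactly once and no per-pixel depth comparisons are needed.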
An efficient multi-resolution framework for high quality interactive rendering of massive point clouds using multi-way kd-trees
We present an efficient technique for out-of-core multi-resolution construction and high quality interactive visualization of massive point clouds. Our approach introduces a novel hierarchical level of detail (LOD) organization based on multi-way kd-trees, which simplifies memory management and allows control over the LOD-tree height. The LOD tree, constructed bottom up using a fast high-quality point simplification method, is fully balanced and contains uniformly sized nodes. To this end, we introduce and analyze three efficient point simplification approaches that yield a desired number of high-quality output points. For constant rendering performance, we propose an efficient rendering-on-a-budget method with asynchronous data loading, which delivers fully continuous high quality rendering through LOD geo-morphing and deferred blending. Our algorithm is incorporated in a full end-to-end rendering system, which supports both local rendering and cluster-parallel distributed rendering. The method is evaluated on complex models made of hundreds of millions of point samples.
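The key structural idea above is a kd-tree with a fan-out larger than two: splitting each node into N equal slices keeps the tree balanced and shallow, and every node carries a fixed-size simplified point set as its LOD. A minimal sketch under stated assumptions: the fan-out, node budget, and the random subsample standing in for the paper's high-quality simplification are all illustrative, not the authors' actual method:

```python
import random
from dataclasses import dataclass

@dataclass
class MWKdNode:
    points: list    # this node's coarse (simplified) representation
    children: list  # up to `fanout` children; empty at the leaves

def build_mwkd(points, fanout=4, node_budget=2, axis=0, dims=3):
    """Split the point set into `fanout` equal slices along the current
    axis, recurse with the next axis, then keep a fixed-size subsample as
    this node's LOD.  Equal slices keep the tree fully balanced, and the
    fan-out directly controls the tree height."""
    if len(points) <= node_budget:
        return MWKdNode(points=list(points), children=[])
    pts = sorted(points, key=lambda p: p[axis])
    k = len(pts) // fanout
    slices = [pts[i * k:(i + 1) * k] for i in range(fanout - 1)]
    slices.append(pts[(fanout - 1) * k:])
    children = [build_mwkd(s, fanout, node_budget, (axis + 1) % dims, dims)
                for s in slices if s]
    # Placeholder simplification: a real system would run one of the
    # paper's point-simplification methods here instead of sampling.
    coarse = random.sample(points, node_budget)
    return MWKdNode(points=coarse, children=children)
```

Rendering then descends only as deep as the point budget allows, drawing each visited node's coarse set.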
Repurpose 2D Animations for a VR Environment using BDH Shape Interpolation
Virtual Reality technology has spread rapidly in recent years. However, its growth risks stalling due to a scarcity of quality content, with few exceptions. We present an original framework that allows artists to use 2D characters and animations in a 3D Virtual Reality environment, giving easier access to the production of content for the platform. In traditional platforms, 2D animation represents a more economical and immediate alternative to 3D. The challenge in adapting 2D characters to a 3D environment is to interpret the missing depth information. A 2D character is flat: it carries no depth information, and every body part lies at the same level as the others. We exploit mesh interpolation, billboarding and parallax scrolling to simulate the depth between each body segment of the character. We have developed a prototype of the system, and extensive tests with a 2D animation production show the effectiveness of our framework.
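The parallax-scrolling part of the technique above can be sketched very simply: assign each flat body segment a small depth value and shift it proportionally to camera motion, so nearer segments move more than farther ones. The segment names and depth values below are illustrative assumptions, not values from the paper:

```python
# Hypothetical per-segment depth offsets, hand-tuned per character
# (positive = closer to the viewer than the torso plane).
SEGMENT_DEPTH = {
    "torso": 0.00,
    "rear_arm": -0.05,
    "front_arm": 0.05,
    "head": 0.02,
}

def parallax_offset(segment, camera_dx, strength=1.0):
    """Horizontal parallax shift for one flat segment as the camera pans
    by `camera_dx`: segments at different depths shift by different
    amounts, faking inter-segment depth on an otherwise flat character."""
    return SEGMENT_DEPTH[segment] * camera_dx * strength
```

Combined with billboarding (orienting each segment quad toward the camera), this keeps the character readable from oblique VR viewpoints.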
A survey of real-time crowd rendering
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting performance and realism of crowds such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
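The runtime LoD selection the survey discusses commonly boils down to picking a representation per agent from its camera distance: a full polygon mesh up close, a simplified mesh at mid range, and an image-based impostor far away. A minimal sketch of such a selector; the threshold values and representation names are illustrative assumptions, not figures from the survey:

```python
def select_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Pick a per-agent representation by camera distance.  Real systems
    often refine this with screen-space projected size, hysteresis to
    avoid popping at boundaries, and a global triangle budget."""
    near, mid, far = thresholds
    if distance < near:
        return "full_mesh"
    if distance < mid:
        return "low_poly"
    if distance < far:
        return "impostor"
    return "culled"   # beyond the last threshold: skip entirely
```

Frustum and occlusion culling then remove agents before this test ever runs, so the selector only sees potentially visible crowd members.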
10411 Abstracts Collection -- Computational Video
From 10.10.2010 to 15.10.2010, the Dagstuhl Seminar 10411 "Computational Video" was held in Schloss Dagstuhl -- Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
Game engines selection framework for high-fidelity serious applications
Serious games represent the state-of-the-art in the convergence of electronic gaming technologies with instructional design principles and pedagogies. Despite the value of high-fidelity content in engaging learners and providing realistic training environments, building games which deliver high levels of visual and functional realism is a complex, time-consuming and expensive process. Therefore, commercial game engines, which provide a development environment and resources to more rapidly create high-fidelity virtual worlds, are increasingly used for serious as well as for entertainment applications. To this end, the authors propose a new framework for the selection of game engines for serious applications and set out five elements for analysis of engines in order to create a benchmarking approach to the validation of game engine selection. Selection criteria for game engines and the choice of platform for Serious Games are substantially different from entertainment games, as Serious Games have very different objectives, emphases and technical requirements. In particular, the convergence of training simulators with serious games, made possible by increasing hardware rendering capacity, is enabling the creation of high-fidelity serious games, which challenge existing instructional approaches. This paper overviews several game engines that are suitable for high-fidelity serious games, using the proposed framework.
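A benchmarking framework over a fixed set of analysis elements typically reduces to a weighted score per candidate engine. A minimal sketch of that pattern; the element names and weights below are illustrative assumptions, since the abstract does not enumerate the framework's five elements:

```python
# Hypothetical analysis elements and weights -- stand-ins for the
# framework's five elements, which the abstract does not name.
WEIGHTS = {
    "fidelity": 0.30,
    "tooling": 0.20,
    "licensing_cost": 0.20,
    "networking": 0.15,
    "portability": 0.15,
}

def score_engine(ratings, weights=WEIGHTS):
    """Weighted sum of per-element ratings (each in [0, 1]); a higher
    score means a better fit for the serious application at hand."""
    return sum(weights[k] * ratings[k] for k in weights)
```

Comparing engines then means scoring each one's ratings under the same weights and ranking the results, which makes the selection criteria explicit and repeatable.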
Survey of image-based representations and compression techniques
In this paper, we survey the techniques for image-based rendering (IBR) and for compressing image-based representations. Unlike traditional three-dimensional (3-D) computer graphics, in which 3-D geometry of the scene is known, IBR techniques render novel views directly from input images. IBR techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative techniques. IBR techniques demonstrate a surprisingly diverse range in their extent of use of images and geometry in representing 3-D scenes. We explore the issues in trading off the use of images and geometry by revisiting plenoptic-sampling analysis and the notions of view dependency and geometric proxies. Finally, we highlight compression techniques specifically designed for image-based representations. Such compression techniques are important in making IBR techniques practical.
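The "rendering with explicit geometry" category above can be illustrated with the simplest possible case: two parallel pinhole cameras, where per-pixel depth tells you exactly where a reference pixel lands in the novel view via the standard stereo disparity relation. A minimal sketch, assuming rectified cameras and the usual sign convention (nearer points shift more):

```python
def warp_x(u, depth, focal, baseline):
    """Column of a reference pixel in a second parallel camera a distance
    `baseline` away: disparity = focal * baseline / depth, so nearby
    points (small depth) shift by many pixels and distant points barely
    move.  The shift direction depends on the camera arrangement."""
    return u - focal * baseline / depth
```

With no depth at all, one is in the "rendering without geometry" category (pure light-field / plenoptic interpolation); with only correspondences instead of metric depth, in the implicit-geometry category the survey describes.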