Probabilistic motion sequence generation
Creating long animation sequences with non-trivial repetitions is a time-consuming and often difficult task. This holds for 2D images and even more so for 3D sequences. Building on the idea of video textures, we propose a simple algorithm that creates new, user-controlled animation sequences from only a few key frames by analyzing velocity and position coherence. The simplicity of the method comes from carrying out the calculations on the principal components of the reference animation, which reduces the dimensionality of the input data and also yields significant compression. Smooth animations are ensured by one of the proposed blending schemes.
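The core of the method, projecting the reference frames onto a few principal components and then measuring position and velocity coherence in that low-dimensional space, can be sketched roughly as follows (a toy NumPy illustration, not the authors' implementation; the frame data, number of components, and velocity weight are invented for the example):

```python
import numpy as np

def pca_compress(frames, k):
    """Project animation frames (n_frames x n_dofs) onto the first k
    principal components to reduce dimensionality."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD of the centered data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    coeffs = centered @ Vt[:k].T          # k coefficients per frame
    return coeffs, Vt[:k], mean

def transition_cost(coeffs, i, j, w_vel=1.0):
    """Distance between frames i and j in PC space, combining position
    coherence and (finite-difference) velocity coherence."""
    pos = np.linalg.norm(coeffs[i] - coeffs[j])
    vi = coeffs[i] - coeffs[i - 1] if i > 0 else 0.0
    vj = coeffs[j] - coeffs[j - 1] if j > 0 else 0.0
    vel = np.linalg.norm(vi - vj)
    return pos + w_vel * vel

# toy reference animation: 100 frames, 30 degrees of freedom
rng = np.random.default_rng(0)
frames = np.cumsum(rng.normal(size=(100, 30)), axis=0)
coeffs, basis, mean = pca_compress(frames, k=8)
```

Low `transition_cost` pairs are candidate cut points for video-texture-style resequencing; storing only `coeffs`, `basis`, and `mean` gives the compression mentioned in the abstract.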
Data-driven local coordinate systems for image-based rendering
Image-based representations of an object profit from known geometry. The more accurately this geometry is known, the better corresponding pixels in the different images can be aligned, which leads to fewer artifacts and better compression performance. For opaque objects, the per-pixel data can then be interpreted as a sampling of the BRDF at the respective surface point. Parameterizing this sampled data requires a coordinate frame. In previous work this frame was either the global frame or a local frame derived from the base geometry. Both approaches lead to misalignments between sample vectors: features of essentially similar BRDFs are shifted to different regions of the sample vector, resulting in poor compression performance. To improve alignment between the sampled BRDFs in image-based rendering, we propose an optimization algorithm that determines consistent coordinate frames for every sample point on the object surface. In this way we efficiently align the features even of anisotropic reflection functions and reconstruct approximate local coordinate frames without performing an explicit 3D reconstruction. The optimization is computed efficiently by exploiting the Fourier shift theorem for spherical harmonics. To handle different materials in a scene, the technique is combined with a clustering algorithm. We demonstrate the utility of our method by applying it to BTFs and 6D surface reflectance fields.
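The shift theorem lets the optimization evaluate all rotational alignments at once in the frequency domain instead of testing each rotation explicitly. The paper works with spherical harmonics on the sphere; the 1D circular analogue below only illustrates the underlying idea (a hypothetical NumPy sketch, with an invented Gaussian "feature" standing in for a sampled reflectance slice):

```python
import numpy as np

def best_circular_shift(ref, sig):
    """Return the circular shift s with sig == roll(ref, s) best, found
    via the Fourier shift theorem: circular cross-correlation computed
    as IFFT(conj(FFT(ref)) * FFT(sig)), one pass instead of n tests."""
    corr = np.fft.ifft(np.conj(np.fft.fft(ref)) * np.fft.fft(sig)).real
    return int(np.argmax(corr))

# two sampled 1D "reflectance slices" that differ only by a rotation
n = 64
base = np.exp(-0.5 * ((np.arange(n) - 20) / 4.0) ** 2)  # feature at bin 20
shifted = np.roll(base, 13)                             # same feature, rotated
print(best_circular_shift(base, shifted))  # 13
```

Undoing the recovered shift aligns the feature to the same region of the sample vector, which is exactly what makes the subsequent PCA-style compression effective.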
Procedural Editing of Bidirectional Texture Functions
... appearance, but the user is currently not able to change this appearance in an effective and intuitive way. Such editing operations would require a low-dimensional but expressive appearance model that exposes only a small set of intuitively editable parameters (1D sliders, 2D maps) to the user while preserving all visually relevant details. In this paper we present a novel editing technique for complex spatially varying materials. It is based on the observation that we are already good at modeling the basic geometric structure of many natural and man-made materials, but still lack effective models for the detailed small-scale geometry and the interaction of light with these materials. Our main idea is to use procedural geometry to define the basic structure of a material and then to enrich this structure with BTF information captured from real materials. By employing recent algorithms for real-time texture synthesis and BTF compression, our technique allows interactive editing.
Efficient and Realistic Visualization of Cloth (Eurographics Symposium on Rendering 2003, Per Christensen and Daniel Cohen-Or, Eds.)
Efficient and realistic rendering of cloth is of great interest, especially in the context of e-commerce. Aside from the simulation of cloth draping, the rendering has to provide the "look and feel" of the fabric itself. In this paper we present a novel interactive rendering algorithm that preserves this "look and feel" of different fabrics. This is done by using the bidirectional texture function (BTF) of the fabric, which is acquired from a rectangular probe and, after synthesis, mapped onto the simulated geometry. Instead of fitting a special type of bidirectional reflectance distribution function (BRDF) model to each texel of our BTF, we generate view-dependent texture maps using a principal component analysis of the original data. These view-dependent texture maps are then illuminated and rendered using either point light sources or high dynamic range environment maps by exploiting current graphics hardware. In both cases, self-shadowing caused by geometry is taken into account. For point light sources, we also present a novel method to generate smooth shadow boundaries on the geometry. Depending on the geometric complexity and the sampling density of the environment map, the illumination can be changed interactively. To ensure interactive frame rates for denser samplings or more complex objects, we introduce a principal-component-based decomposition of the illumination of the geometry. The high quality of the results is demonstrated by several examples. The algorithm is also suitable for materials other than cloth, as long as these materials exhibit similar reflectance behavior.
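The idea of replacing per-texel BRDF fits with PCA-derived view-dependent texture maps can be illustrated by a plain PCA over per-view texture rows (a hypothetical NumPy sketch; the synthetic rank-8 data, view count, and texel count are invented and far smaller than a real BTF):

```python
import numpy as np

def fit_vdtm(btf, k):
    """Compress per-view texture data into k eigen-textures.

    btf: (n_views, n_texels), one flattened texture per view direction
    (a toy stand-in for the paper's measured data). Returns the mean
    texture, k eigen-textures, and k weights per view."""
    mean = btf.mean(axis=0)
    U, S, Vt = np.linalg.svd(btf - mean, full_matrices=False)
    eigen_tex = Vt[:k]                     # k eigen-textures
    weights = (btf - mean) @ eigen_tex.T   # k weights per view
    return mean, eigen_tex, weights

def reconstruct_view(mean, eigen_tex, weights, view_idx):
    """Rebuild the texture for one view from its k weights."""
    return mean + weights[view_idx] @ eigen_tex

# synthetic rank-8 data so k=8 reconstructs (almost) exactly:
# 81 views, 16x16 texels flattened
rng = np.random.default_rng(1)
true_basis = rng.random((8, 256))
btf = rng.random((81, 8)) @ true_basis
mean, et, w = fit_vdtm(btf, k=8)
approx = reconstruct_view(mean, et, w, view_idx=0)
err = np.linalg.norm(approx - btf[0]) / np.linalg.norm(btf[0])
```

At render time only the eigen-textures and per-view weights need to reside on the GPU; the weighted sum per fragment is cheap, which is what enables the interactive relighting described above.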
Exploitation of human shadow perception for fast shadow rendering
Figure 1: Visual perception of shadows; decreasing level of detail for the shadow-caster object from left to right; hard shadows cast by a point light source (top row) and soft shadows cast by an area light source (bottom row).
In this paper we describe an experiment that probes the perceptual limits of the human visual system regarding shadow perception. Shadows play an important part in communicating the spatial structure of objects to the observer, and they are essential for the overall realism of the rendered image. Unfortunately, most computer-graphics algorithms capable of producing realistic shadows are computationally expensive. The main idea behind the experiment is to use a simplified version of the shadow caster to generate hard and soft shadows, which greatly increases performance, and to evaluate how far the caster can be simplified without producing noticeable errors. To this end, we perform an experiment in which test persons mark the point of just-noticeable difference. First results show that a mesh simplified to only 1% of its original complexity can cast soft shadows that satisfy 90% of the test persons.
Hardware-accelerated ambient occlusion computation
In this paper, we present a novel, hardware-accelerated approach to compute the visibility between surface points and directional light sources. Our method thus provides a first-order approximation of the rendering equation in graphics hardware. This is done by accumulating depth tests of vertex fragments as seen from a number of light directions. The method needs no preprocessing of the scene elements and introduces no memory overhead. Besides handling large polygonal models, it is suitable for deformable or animated objects under time-varying high-dynamic-range illumination at interactive frame rates.
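The accumulation of per-direction visibility tests can be mimicked on the CPU for a simple heightfield (a hypothetical NumPy sketch, not the authors' GPU implementation; the heightfield, direction count, step count, and ray slope are invented for the example):

```python
import numpy as np

def heightfield_ao(height, n_dirs=16, n_steps=8, step=1.0, slope=0.5):
    """Per-texel accessibility of a heightfield: the fraction of sampled
    directions along which no higher sample blocks a ray rising at a
    fixed slope. A CPU stand-in for accumulating per-light depth tests;
    the paper performs the equivalent pass in graphics hardware."""
    h, w = height.shape
    ao = np.zeros((h, w))
    angles = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
    for y in range(h):
        for x in range(w):
            visible = 0
            for a in angles:
                dx, dy = np.cos(a), np.sin(a)
                blocked = False
                for s in range(1, n_steps + 1):
                    sx = int(round(x + dx * s * step))
                    sy = int(round(y + dy * s * step))
                    if 0 <= sx < w and 0 <= sy < h:
                        # depth-test analogue: a sample above the ray occludes
                        if height[sy, sx] > height[y, x] + slope * s * step:
                            blocked = True
                            break
                if not blocked:
                    visible += 1
            ao[y, x] = visible / n_dirs
    return ao

# a single bump: texels next to it are partially occluded, distant ones not
field = np.zeros((16, 16))
field[8, 8] = 4.0
occ = heightfield_ao(field)
```

The averaged pass/fail counts play the role of the accumulated depth tests in the abstract; on the GPU each direction becomes one shadow-test render pass, which is why no preprocessing or extra storage is needed.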