    Time-varying volume visualization

    Volume rendering is a very active research field in computer graphics because of its wide range of applications in various sciences, from medicine to flow mechanics. In this report, we survey the state of the art in time-varying volume rendering. We introduce several basic concepts and then establish criteria to classify the studied works: IVR versus DVR, 4D versus 3D+time, compression techniques, involved architectures, use of parallelism, and image-space versus object-space coherence. We also address related problems such as transfer functions and the computation of 2D cross-sections of time-varying volume data. All the reviewed papers are classified into several tables according to these criteria, and finally several conclusions are presented.
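
    As an illustration of the DVR side of the IVR/DVR classification above, the following minimal C++ sketch shows the front-to-back compositing loop at the heart of direct volume rendering of a time-varying field. It is not taken from any of the surveyed papers; the transfer function and volume lookup are trivial stand-ins.

    // Illustrative sketch (not from the survey): samples along a viewing ray are
    // mapped through a transfer function and composited front to back.
    #include <algorithm>
    #include <cmath>

    struct RGBA { float r, g, b, a; };

    // Stand-in transfer function: maps a scalar density to a grey color and opacity.
    RGBA transferFunction(float v) { return { v, v, v, std::min(1.0f, 0.5f * v) }; }

    // Stand-in volume lookup at position (x, y, z) and time t; a real renderer
    // would interpolate a (possibly compressed) time-varying grid here.
    float sampleVolume(float x, float y, float z, float t)
    {
        return 0.5f + 0.5f * std::sin(x + y + z + t);
    }

    // Front-to-back compositing along one ray, with early termination once the
    // accumulated opacity is nearly saturated.
    RGBA composeRay(const float origin[3], const float dir[3],
                    float tNear, float tFar, float step, float time)
    {
        RGBA acc = { 0, 0, 0, 0 };
        for (float s = tNear; s < tFar && acc.a < 0.99f; s += step) {
            float v = sampleVolume(origin[0] + s * dir[0],
                                   origin[1] + s * dir[1],
                                   origin[2] + s * dir[2], time);
            RGBA c = transferFunction(v);
            float w = (1.0f - acc.a) * c.a;   // "over" operator, front to back
            acc.r += w * c.r;
            acc.g += w * c.g;
            acc.b += w * c.b;
            acc.a += w;
        }
        return acc;
    }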

    Photon Splatting Using a View-Sample Cluster Hierarchy

    Splatting photons onto primary view samples, rather than gathering from a photon acceleration structure, can be a more efficient approach to evaluating the photon-density estimate in interactive applications, where the number of photons is often low compared to the number of view samples. Most photon-splatting approaches struggle with large photon radii or high resolutions due to overdraw and insufficient culling. In this paper, we show how dynamic real-time diffuse interreflection can be achieved by building a full 3D acceleration structure over the view samples and then splatting photons onto the view samples by traversing this data structure. Fully dynamic lighting and scenes are possible by tracing and splatting photons and rebuilding the acceleration structure every frame. We show that the number of view-sample/photon tests can be significantly reduced and suggest further culling techniques based on the normal cone of each node in the hierarchy. Finally, we present an approximate variant of our algorithm in which photon traversal is stopped at a fixed level of the hierarchy and the incoming radiance is accumulated per node and direction, rather than per view sample. This improves performance significantly with little visible degradation of quality.
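
    A hedged sketch of the kind of traversal the abstract describes, not the paper's implementation: a photon descends a hierarchy built over view samples, with subtrees culled by the photon radius (bounding-sphere test) and by each node's normal cone. All types, fields, and the density-estimation kernel below are illustrative stand-ins.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

    struct ViewSample { Vec3 position, normal; float accumulated = 0.0f; };

    struct Photon { Vec3 position, incomingDir; float radius, power; };

    struct Node {
        Vec3 center; float radius;         // bounding sphere of contained view samples
        Vec3 coneAxis; float coneCosHalf;  // normal cone of contained view samples
        int left = -1, right = -1;         // child indices; -1 marks a leaf
        int firstSample = 0, sampleCount = 0;
    };

    // Stand-in density-estimation kernel: constant kernel inside the photon radius.
    static void accumulate(ViewSample& s, const Photon& p)
    {
        Vec3 d = sub(s.position, p.position);
        if (dot(d, d) <= p.radius * p.radius && dot(s.normal, p.incomingDir) < 0.0f)
            s.accumulated += p.power / (3.14159265f * p.radius * p.radius);
    }

    static void splat(const std::vector<Node>& nodes, std::vector<ViewSample>& samples,
                      const Photon& p, int nodeIndex)
    {
        const Node& n = nodes[nodeIndex];

        // Distance culling: skip the subtree if the photon sphere misses the node.
        Vec3 d = sub(p.position, n.center);
        if (std::sqrt(dot(d, d)) > p.radius + n.radius) return;

        // Normal-cone culling (conservative): skip the subtree if every normal in
        // the cone faces away from the direction the photon arrived from.
        Vec3 toLight = { -p.incomingDir.x, -p.incomingDir.y, -p.incomingDir.z };
        float sinHalf = std::sqrt(std::max(0.0f, 1.0f - n.coneCosHalf * n.coneCosHalf));
        if (dot(n.coneAxis, toLight) + sinHalf < 0.0f) return;

        if (n.left < 0) {                  // leaf: run the density estimate
            for (int i = 0; i < n.sampleCount; ++i)
                accumulate(samples[n.firstSample + i], p);
            return;
        }
        splat(nodes, samples, p, n.left);
        splat(nodes, samples, p, n.right);
    }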

    A Deferred Shading Pipeline for Real-Time Indirect Illumination

    We present a deferred shading algorithm for computing screen-space multi-bounce indirect illumination with visibility, in real time. For each frame, we compute mipmapped G-buffers of depth, normals, illumination, and voxelized geometry. To each mipmap level we apply a single shader that gathers screen-space illumination using local Monte-Carlo integration. We upsample the illumination for all levels and smoothly combine them. Our calculation is approximate but does not show artifacts, because it relies on noise-free Monte-Carlo integration of incoming illumination and on temporal filtering. Our method simulates arbitrary distant illumination, including visibility, at very low cost, because we only perform local texture lookups during computation. Moreover, its deferred-shading nature makes it independent of geometric and lighting complexity.
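
    The following C++ sketch is an illustrative CPU-side version of the per-mip-level gather described above, not the paper's shader: each receiver pixel integrates incoming radiance from jittered screen-space neighbours, weighted by a simplified sender/receiver geometric term. Visibility through the voxelized geometry and the cross-level upsampling are omitted, and all names and buffer layouts are assumptions.

    #include <algorithm>
    #include <cmath>
    #include <random>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct GBufferLevel {                  // one mip level of the deferred buffers
        int width, height;
        std::vector<Vec3> position, normal, radiance;
        int clampIdx(int x, int y) const {
            x = std::clamp(x, 0, width - 1);
            y = std::clamp(y, 0, height - 1);
            return y * width + x;
        }
    };

    Vec3 gatherIndirect(const GBufferLevel& g, int px, int py,
                        int numSamples, float radiusTexels, std::mt19937& rng)
    {
        std::uniform_real_distribution<float> uni(0.0f, 1.0f);
        Vec3 P = g.position[g.clampIdx(px, py)];
        Vec3 N = g.normal[g.clampIdx(px, py)];
        Vec3 sum = { 0, 0, 0 };

        for (int i = 0; i < numSamples; ++i) {
            // Jittered disc sample around the receiver pixel, in texel units.
            float a = 6.2831853f * (i + uni(rng)) / numSamples;
            float r = radiusTexels * std::sqrt(uni(rng));
            int idx = g.clampIdx(px + int(r * std::cos(a)), py + int(r * std::sin(a)));

            Vec3 Q = g.position[idx], Nq = g.normal[idx], L = g.radiance[idx];

            // Simplified sender/receiver geometric term.
            Vec3 d = { Q.x - P.x, Q.y - P.y, Q.z - P.z };
            float len2 = d.x * d.x + d.y * d.y + d.z * d.z + 1e-4f;
            float invLen = 1.0f / std::sqrt(len2);
            float cosR = std::max(0.0f,  (N.x * d.x + N.y * d.y + N.z * d.z) * invLen);
            float cosS = std::max(0.0f, -(Nq.x * d.x + Nq.y * d.y + Nq.z * d.z) * invLen);
            float w = cosR * cosS / len2;

            sum.x += w * L.x; sum.y += w * L.y; sum.z += w * L.z;
        }
        float inv = 1.0f / float(numSamples);
        return { sum.x * inv, sum.y * inv, sum.z * inv };
    }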

    Novel illumination algorithms for off-line and real-time rendering

    This thesis presents new and efficient illumination algorithms for off-line and real-time rendering. The realistic rendering of arbitrary indirect illumination is a difficult task. Assuming a ray-optics model of light, the rendering equation describes the propagation of light in the scene with high accuracy. However, the computation is expensive, and thus even in off-line rendering, i.e., in prerendered animations, indirect illumination is often approximated, as it would otherwise constitute a bottleneck in the production pipeline. Indirect illumination can be computed using Monte Carlo integration, but when restricted to a reasonable amount of computation time, the result is often corrupted by noise. This thesis includes a method that effectively reduces the noise by applying a spatially varying filter to the noisy illumination. For real-time performance, some components of indirect illumination can be precomputed. The irradiance volume and its many variations precompute the reflections and shadowing of a static scene into a volumetric data structure, which is then used to shade dynamic objects in real time. The practical use of the method is limited by aliasing artifacts; this thesis shows that a suitable super-sampling approach yields a significant quality improvement. Another direction is to precompute how light propagates in the scene and use the precomputed data at run time to solve both direct and indirect illumination from the known incident lighting. To keep the memory and precomputation costs tractable, these methods are typically restricted to infinitely distant lighting; those that are not require very long precomputation times. This thesis presents an algorithm that adopts a wavelet-based hierarchical finite element method for the precomputation, obtaining a significant performance improvement over existing techniques. When full global illumination cannot be afforded, ambient occlusion is an attractive alternative. This thesis includes two methods for real-time rendering of ambient occlusion in dynamic scenes: the first models the shadowing of ambient light between rigid moving bodies, and the second gives a data-oriented solution for rendering approximate ambient occlusion for animated characters in real time. Both methods achieve unprecedented efficiency.
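
    For reference, the rendering equation the abstract refers to can be written in its standard hemispherical form (the notation here is a common convention, not taken from the thesis):

    \[
      L_o(x, \omega_o) \;=\; L_e(x, \omega_o) \;+\; \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i ,
    \]

    where L_o is the outgoing radiance, L_e the emitted radiance, f_r the BRDF, L_i the incoming radiance, and the integral runs over the hemisphere around the surface normal n at point x.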