438 research outputs found

    Fast Rendering of Forest Ecosystems with Dynamic Global Illumination

    Real-time rendering of large-scale forest ecosystems remains a challenging problem, in that important global illumination effects, such as leaf transparency and inter-object light scattering, are difficult to capture given tight timing constraints and scenes that typically contain hundreds of millions of primitives. We propose a new lighting model, adapted from a model previously used to light convective clouds and other participating media, together with GPU ray tracing, in order to achieve these global illumination effects while maintaining near real-time performance. The lighting model is based on a lattice-Boltzmann method in which reflectance, transmittance, and absorption parameters are taken from measurements of real plants. The lighting model is solved as a preprocessing step, requires only seconds on a single GPU, and allows dynamic lighting changes at run-time. The ray tracing engine, which runs on one or multiple GPUs, combines multiple acceleration structures to achieve near real-time performance for large, complex scenes. Both the preprocessing step and the ray tracing engine make extensive use of NVIDIA's Compute Unified Device Architecture (CUDA).
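    As a rough illustration of the lattice-Boltzmann idea (a minimal sketch, not the paper's implementation), the snippet below propagates per-direction photon densities over a voxel grid, splitting the energy at each voxel into reflected, transmitted, and absorbed parts; the grid size, coefficient values, and six-direction lattice are illustrative assumptions.

        import numpy as np

        N = 32                                        # voxels per axis (assumed)
        DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                (0, -1, 0), (0, 0, 1), (0, 0, -1)]    # six lattice directions
        OPPOSITE = [1, 0, 3, 2, 5, 4]                 # index of the reversed direction

        absorb   = np.full((N, N, N), 0.05)           # absorption per voxel (assumed)
        reflect  = np.full((N, N, N), 0.30)           # reflectance per voxel (assumed)
        transmit = 1.0 - absorb - reflect             # remainder is transmitted

        # One photon-density field per direction; seed light entering the +z face.
        f = np.zeros((len(DIRS), N, N, N))
        f[5, :, :, -1] = 1.0

        for _ in range(64):                           # relax until the field settles
            # Collision: split each direction's energy into transmitted and
            # reflected parts; the absorbed fraction is simply dropped.
            passed  = f * transmit
            bounced = (f * reflect)[OPPOSITE]         # reflected energy reverses direction
            f = passed + bounced
            # Streaming: shift each field one voxel along its direction
            # (np.roll wraps at the boundary; a real solver would clamp).
            for i, (dx, dy, dz) in enumerate(DIRS):
                f[i] = np.roll(f[i], shift=(dx, dy, dz), axis=(0, 1, 2))

        irradiance = f.sum(axis=0)                    # per-voxel lighting estimate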

    Real-time smoke rendering using compensated ray marching

    We present a real-time algorithm called compensated ray marching for rendering of smoke under dynamic low-frequency environment lighting. Our approach is based on a decomposition of the input smoke animation, represented as a sequence of volumetric density fields, into a set of radial basis functions (RBFs) and a sequence of residual fields. To expedite rendering, the source radiance distribution within the smoke is computed from only the low-frequency RBF approximation of the density fields, since the high-frequency residuals have little impact on global illumination under low-frequency environment lighting. Furthermore, in computing source radiances the contributions from single and multiple scattering are evaluated at only the RBF centers and then approximated at other points in the volume using an RBF-based interpolation. A slice-based integration of these source radiances along each view ray is then performed to render the final image. The high-frequency residual fields, which are a critical component in the local appearance of smoke, are compensated back into the radiance integral during this ray march to generate images of high detail. The runtime algorithm, which includes both light transfer simulation and ray marching, can be easily implemented on the GPU, and thus allows for real-time manipulation of viewpoint and lighting, as well as interactive editing of smoke attributes such as extinction cross section, scattering albedo, and phase function. Only moderate preprocessing time and storage are needed. This approach provides the first method for real-time smoke rendering that includes single and multiple scattering while generating results comparable in quality to offline algorithms like ray tracing.
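    The sketch below illustrates the compensation step under assumed interfaces (the callables rbf_density, residual_density, and rbf_radiance are hypothetical stand-ins for the paper's precomputed fields): source radiance is read from the smooth low-frequency approximation, while extinction along the ray uses the full density, RBF part plus residual, so fine detail is restored during the march.

        import numpy as np

        def march_ray(origin, direction, t_max, step,
                      rbf_density, residual_density, rbf_radiance, sigma_t=1.0):
            """Hypothetical stand-ins: rbf_density, residual_density, and
            rbf_radiance replace the paper's precomputed fields."""
            L, T = 0.0, 1.0                      # accumulated radiance, transmittance
            t = 0.5 * step
            while t < t_max and T > 1e-4:
                p = origin + t * direction
                # Extinction uses the full density: smooth RBF part plus residual,
                # which is the compensation that restores high-frequency detail.
                rho = rbf_density(p) + residual_density(p)
                # Source radiance is taken from the low-frequency approximation only.
                S = rbf_radiance(p)
                alpha = 1.0 - np.exp(-sigma_t * rho * step)
                L += T * alpha * S
                T *= 1.0 - alpha
                t += step
            return L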

    Flux-Limited Diffusion for Multiple Scattering in Participating Media

    For the rendering of multiple scattering effects in participating media, methods based on the diffusion approximation are an extremely efficient alternative to Monte Carlo path tracing. However, in sufficiently transparent regions, the classical diffusion approximation suffers from non-physical radiative fluxes, which lead to a poor match to correct light transport. In particular, this prevents the application of classical diffusion approximation to heterogeneous media, where opaque material is embedded within transparent regions. To address this limitation, we introduce flux-limited diffusion, a technique from the astrophysics domain. This method provides a better approximation to light transport than classical diffusion approximation, particularly when applied to heterogeneous media, and hence broadens the applicability of diffusion-based techniques. We provide an algorithm for flux-limited diffusion, which is validated using the transport theory for a point light source in an infinite homogeneous medium. We further demonstrate that our implementation of flux-limited diffusion produces more accurate renderings of multiple scattering in various heterogeneous datasets than classical diffusion approximation, by comparing both methods to ground-truth renderings obtained via volumetric path tracing. Comment: Accepted in Computer Graphics Forum.
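    For context, a commonly used flux-limited form (the Levermore-Pomraning limiter; the paper may adopt a different one) replaces the classical coefficient D = 1/(3 sigma_t) with D = lambda(R)/sigma_t, where R = |grad phi| / (sigma_t phi). The sketch below computes this coefficient; it recovers classical diffusion as R approaches 0 and caps the flux at the free-streaming limit as R grows.

        import numpy as np

        def diffusion_coefficient(phi, grad_phi_norm, sigma_t):
            """phi: fluence, grad_phi_norm: |grad phi|, sigma_t: extinction coefficient."""
            # Dimensionless ratio; large in transparent (low-extinction) regions.
            R = np.maximum(grad_phi_norm / np.maximum(sigma_t * phi, 1e-12), 1e-6)
            # Levermore-Pomraning limiter: lambda(R) = (coth R - 1/R) / R.
            # lambda -> 1/3 as R -> 0 (classical diffusion, D = 1/(3 sigma_t));
            # lambda -> 1/R as R -> inf, capping the flux at the free-streaming limit.
            lam = (1.0 / np.tanh(R) - 1.0 / R) / R
            return lam / sigma_t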

    Local and Global Illumination in the Volume Rendering Integral


    BSDF Importance Baking: A Lightweight Neural Solution to Importance Sampling General Parametric BSDFs

    Parametric Bidirectional Scattering Distribution Functions (BSDFs) are pervasively used because of their flexibility to represent a large variety of material appearances by simply tuning the parameters. While efficient evaluation of parametric BSDFs has been well studied, high-quality importance sampling techniques for parametric BSDFs are still scarce. Existing sampling strategies either heavily rely on approximations, resulting in high variance, or perform sampling on only a portion of the whole BSDF slice. Moreover, many of the sampling approaches are specifically paired with certain types of BSDFs. In this paper, we seek an efficient and general way of importance sampling parametric BSDFs. We observe that the essence of importance sampling is a mapping between a uniform distribution and the target distribution. Specifically, when BSDF parameters are given, the mapping that performs importance sampling on a BSDF slice can simply be recorded as a 2D image that we call an importance map. Following this observation, we accurately precompute the importance maps using a mathematical tool named optimal transport. We then propose a lightweight neural network to efficiently compress the precomputed importance maps. In this way, we bring parametric BSDF importance sampling to the precomputation stage, avoiding heavy runtime computation. Since this process is similar to light baking, where a set of images is precomputed, we name our method importance baking. Together with a BSDF evaluation network and a PDF (probability density function) query network, our method enables full multiple importance sampling (MIS) without any revision to the rendering pipeline. Our method essentially performs perfect importance sampling. Compared with previous methods, we demonstrate reduced noise levels on rendering results with a rich set of appearances.
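    The sketch below shows the role such an importance map plays at render time: a uniform point in [0,1)^2 is warped into a sample of a tabulated BSDF slice. The paper builds the map via optimal transport and compresses it with a small network; here a plain inverse-CDF warp over a discretized PDF stands in, and the 8x8 grid of placeholder values is an assumption.

        import numpy as np

        def build_sampler(pdf_grid):
            """pdf_grid: 2D array of (unnormalized) BSDF slice values."""
            pdf = pdf_grid / pdf_grid.sum()
            marginal = np.maximum(pdf.sum(axis=1), 1e-12)   # distribution over rows
            cdf_rows = np.cumsum(marginal)
            cdf_cols = np.cumsum(pdf, axis=1) / marginal[:, None]

            def sample(u1, u2):
                # Warp the uniform pair: pick a row from the marginal CDF,
                # then a column from that row's conditional CDF.
                i = min(int(np.searchsorted(cdf_rows, u1)), pdf.shape[0] - 1)
                j = min(int(np.searchsorted(cdf_cols[i], u2)), pdf.shape[1] - 1)
                return i, j, pdf[i, j]        # discrete cell and its PDF (for MIS)
            return sample

        # Toy usage with an 8x8 slice of placeholder values:
        sampler = build_sampler(np.random.rand(8, 8) + 1e-3)
        i, j, p = sampler(0.37, 0.82)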