152 research outputs found

    Neural Free-Viewpoint Relighting for Glossy Indirect Illumination

    Precomputed Radiance Transfer (PRT) remains an attractive solution for real-time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real-time. However, practical PRT methods are usually limited to low-frequency spherical harmonic lighting. All-frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we seek to represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi-layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters being other MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real-time (512 × 512 at 24 FPS, 800 × 600 at 13 FPS) precomputed rendering of challenging scenes involving view-dependent reflections and even caustics. Comment: 13 pages, 9 figures, to appear in CGF proceedings of EGSR 202
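A minimal sketch of the relighting structure described in the abstract, with hypothetical dimensions and randomly initialized weights standing in for the learned tensor-decomposed feature field and MLP: the network predicts one wavelet-domain transport coefficient per Haar index, and relit radiance is the dot product of those predictions with the lighting's Haar coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not taken from the paper.
N_WAVELETS = 64   # retained Haar lighting coefficients
FEAT_DIM = 16     # per-point feature vector from the feature field
HIDDEN = 32

# Random stand-ins for the learned MLP weights.
W1 = 0.1 * rng.normal(size=(FEAT_DIM + 1 + 3 + 2, HIDDEN))
W2 = 0.1 * rng.normal(size=(HIDDEN, 1))

def transport_coeff(feat, wavelet_idx, refl_dir, material):
    """Predict one wavelet-domain transport coefficient T_j(x, omega_r)."""
    j = np.array([wavelet_idx / N_WAVELETS])        # normalized wavelet index
    h = np.concatenate([feat, j, refl_dir, material])
    h = np.maximum(h @ W1, 0.0)                     # ReLU hidden layer
    return float((h @ W2)[0])

def relight(feat, refl_dir, material, light_coeffs):
    """Radiance = sum_j T_j * L_j over the retained Haar coefficients."""
    T = np.array([transport_coeff(feat, j, refl_dir, material)
                  for j in range(N_WAVELETS)])
    return float(T @ light_coeffs)
```

Relighting a new environment map then only requires re-projecting it into the Haar basis; the per-point network evaluations are what the approach runs in real time.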

    Real-time Cinematic Design Of Visual Aspects In Computer-generated Images

    Creation of visually pleasing images has always been one of the main goals of computer graphics. Two important components are necessary to achieve this goal --- artists who design visual aspects of an image (such as materials or lighting) and sophisticated algorithms that render the image. Traditionally, rendering has been of greater interest to researchers, while the design part has always been deemed secondary. This has led to many inefficiencies, as artists, in order to create a stunning image, are often forced to resort to the traditional, creativity-barring pipelines consisting of repeated rendering and parameter tweaking. Our work shifts the attention away from the rendering problem and focuses on the design. We propose to combine non-physical editing with real-time feedback and provide artists with efficient ways of designing complex visual aspects such as global illumination or all-frequency shadows. We conform to existing pipelines by inserting our editing components into existing stages, thereby making editing of visual aspects an inherent part of the design process. Many of the examples shown in this work have been, until now, extremely hard to achieve. The non-physical aspect of our work enables artists to express themselves in more creative ways, not limited by the physical parameters of current renderers. Real-time feedback allows artists to immediately see the effects of applied modifications, and compatibility with existing workflows enables easy integration of our algorithms into production pipelines.

    A Theoretical Analysis of Compactness of the Light Transport Operator

    Rendering photorealistic visuals of virtual scenes requires tractable models for the simulation of light. The rendering equation describes one such model using an integral equation, the crux of which is a continuous integral operator. A majority of rendering algorithms aim to approximate the effect of this light transport operator via discretization (using rays, particles, patches, etc.). Research spanning four decades has uncovered interesting properties and intuition surrounding this operator. In this paper we analyze compactness, a key property that is independent of its discretization and which characterizes the ability to approximate the operator uniformly by a sequence of finite-rank operators. We conclusively prove lingering suspicions that this operator is not compact, and therefore that any discretization relying on finite-rank or nonadaptive finite bases is susceptible to unbounded error over arbitrary light distributions. Our result justifies the expectation for rendering algorithms to be evaluated using a variety of scenes and illumination conditions. We also discover that its lower-dimensional counterpart (over purely diffuse scenes) is not compact except in special cases, and uncover connections with it being noninvertible and acting as a low-pass filter. We explain the relevance of our results in the context of previous work. We believe that our theoretical results will inform future rendering algorithms regarding practical choices.
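In the standard operator formulation (textbook notation, not necessarily the paper's), the rendering equation, the transport operator, and the Neumann-series solution read:

```latex
L = L_e + T L,
\qquad
(TL)(x,\omega_o) = \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L\big(r(x,\omega_i), -\omega_i\big)\, \cos\theta_i \,\mathrm{d}\omega_i,
\qquad
L = (I - T)^{-1} L_e = \sum_{k \ge 0} T^k L_e,
```

where r(x, ω_i) is the ray-cast operator returning the surface point visible from x in direction ω_i. Compactness of T would mean ||T − T_n|| → 0 for some sequence of finite-rank operators T_n; the paper's negative result means no fixed finite basis can drive the worst-case error over all light distributions to zero.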

    Master-element vector irradiance for large tessellated models

    We propose a new global light simulation method for diffuse (or moderately glossy) scenes comprising highly tessellated models with simple topology (e.g., scanned meshes). By using the topological coherence of the surface, we show how to extend a classic Finite Element method called the Master Element: we generalize this method to efficiently handle tessellated models by using mesh parameterization and mesh extrapolation techniques. In addition, we propose a high-order and hierarchical extension of the Master Element method. Our method computes a compact representation of vector irradiance, represented by high-order wavelet bases. For totally diffuse scenes, the computed vector irradiance maps can be transformed into light maps. For moderately glossy scenes, approximated view-dependent lighting can be computed and displayed in real time by the GPU from the vector irradiance maps. Using our method, view-dependent solutions for scenes with over one million polygons are computed in minutes and displayed in real time. As with clustering methods, the time complexity of the method is independent of the number of polygons. By efficiently capturing the lighting signal at a suitable scale, the method is made independent of the geometric discretization and depends solely on the lighting complexity. We demonstrate our method in various settings, with both sharp and soft shadows accurately represented by our hierarchical function basis.
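As a small illustration of the diffuse conversion mentioned above (a generic sketch, not the paper's pipeline): vector irradiance E(x) = ∫ L(x, ω) ω dω collapses the incident lighting at a point into one 3-vector, and a diffuse light-map value follows from a single dot product with the surface normal.

```python
import numpy as np

def diffuse_from_vector_irradiance(E, n, albedo):
    """Exitant diffuse radiance from a vector irradiance sample.

    E: vector irradiance (integral of L(x, w) * w over incident directions),
    n: unit surface normal, albedo: diffuse reflectance in [0, 1].
    Radiance = albedo / pi * max(n . E, 0); the clamp discards light
    arriving from behind the surface.
    """
    return albedo / np.pi * max(float(n @ E), 0.0)

# An overhead light delivering irradiance pi gives radiance 1 for a
# white (albedo = 1) surface.
value = diffuse_from_vector_irradiance(np.array([0.0, 0.0, np.pi]),
                                       np.array([0.0, 0.0, 1.0]), 1.0)
```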

    Importance driven environment map sampling

    In this paper we present an automatic and efficient method for supporting Image Based Lighting (IBL) in bidirectional methods which improves both the sampling of the environment and the detection and sampling of important regions of the scene, such as windows and doors. These often have a small area relative to that of the entire scene, so paths which pass through them are generated with a low probability. The method proposed in this paper improves this by taking view importance into account, and modifies the lighting distribution to use light transport information. This also automatically constructs a sampling distribution in locations which are relevant to the camera position, thereby improving sampling. Results are presented when our method is applied to bidirectional rendering techniques; in particular we show results for Bidirectional Path Tracing, Metropolis Light Transport and Progressive Photon Mapping. Efficiency results demonstrate speed-ups of orders of magnitude (depending on the rendering method used) when compared to other methods.
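A luminance-weighted environment-map sampler is the baseline this abstract improves on. The sketch below implements only that generic baseline, with the paper's view-importance and light-transport information represented by an optional per-texel `importance` array (an assumption for illustration, not the authors' actual construction).

```python
import numpy as np

def build_env_sampler(env, importance=None):
    """Discrete sampler over lat-long environment-map texels.

    env: (H, W, 3) radiance map; importance: optional (H, W) weights standing
    in for view importance. Texels are drawn with probability proportional to
    luminance * sin(theta) (solid-angle correction) * importance.
    """
    H, W, _ = env.shape
    lum = env @ np.array([0.2126, 0.7152, 0.0722])   # per-texel luminance
    theta = (np.arange(H) + 0.5) / H * np.pi         # row latitudes
    weights = lum * np.sin(theta)[:, None]
    if importance is not None:
        weights = weights * importance
    pdf = (weights / weights.sum()).ravel()
    cdf = np.cumsum(pdf)

    def sample(u):
        """Map a uniform u in [0, 1) to ((row, col), probability)."""
        idx = int(np.searchsorted(cdf, u))
        return divmod(idx, W), pdf[idx]

    return sample

sample = build_env_sampler(np.random.default_rng(1).random((8, 16, 3)))
(row, col), p = sample(0.5)
```

Inverting a cumulative distribution this way concentrates samples on bright (or, with the weights modified, transport-relevant) texels, which is exactly the lever the paper's importance information pulls.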

    Phase-shifting Haar Wavelets For Image-based Rendering Applications

    In this thesis, we establish the underlying research background necessary for tackling the problem of phase-shifting in the wavelet transform domain. Solving this problem is the key to reducing the redundancy and huge storage requirement in Image-Based Rendering (IBR) applications which utilize wavelets. Image-based methods for rendering of dynamic glossy objects do not truly scale to all possible frequencies and high sampling rates without trading storage, glossiness, or computational time, while varying both lighting and viewpoint. This is due to the fact that current approaches are limited to precomputed radiance transfer (PRT), which is prohibitively expensive in terms of memory requirements when both lighting and viewpoint variation are required together with high sampling rates for high-frequency lighting of glossy materials. At the root of the above problem is the lack of a closed-form run-time solution to the nontrivial problem of rotating wavelets, which we solve in this thesis. We specifically target Haar wavelets, which provide the most efficient solution to the triple-product integral, which in turn is fundamental to solving the environment lighting problem. The problem is divided into three main steps, each of which provides several key theoretical contributions. First, we derive closed-form expressions for linear phase-shifting in the Haar domain for one-dimensional signals, which can be generalized to N-dimensional signals due to separability. Second, we derive closed-form expressions for linear phase-shifting for two-dimensional signals that are projected using the non-separable Haar transform. For both cases, we show that the coefficients of the shifted data can be computed solely from the coefficients of the original data. We also derive closed-form expressions for non-integer shifts, which have not been reported before. 
As an application example of these results, we apply the new formulae to image shifting, rotation and interpolation, and demonstrate the superiority of the proposed solutions over existing methods. In the third step, we establish a solution for non-linear phase-shifting of two-dimensional non-separable Haar-transformed signals, which is directly applicable to the original problem of image-based rendering. Our solution is the first attempt to provide an analytic solution to the difficult problem of rotating wavelets in the transform domain.
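For context, a sketch under stated assumptions (orthonormal Haar basis, circular shifts, power-of-two signal lengths): without a closed form, shifting in the Haar domain requires a full round trip through the signal domain, shown below. The thesis' closed-form expressions compute the shifted coefficients directly from the original coefficients, eliminating this reconstruction.

```python
import numpy as np

def haar_fwd(x):
    """Orthonormal 1-D Haar analysis; len(x) must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    out = np.empty_like(x)
    n = len(x)
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)  # scaling (average) coeffs
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)  # wavelet (detail) coeffs
        x[:n // 2] = a
        out[n // 2:n] = d
        n //= 2
    out[0] = x[0]
    return out

def haar_inv(c):
    """Inverse of haar_fwd."""
    c = np.asarray(c, dtype=float).copy()
    n = 1
    while n < len(c):
        a, d = c[:n].copy(), c[n:2 * n].copy()
        c[0:2 * n:2] = (a + d) / np.sqrt(2.0)
        c[1:2 * n:2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return c

def shifted_coeffs_naive(coeffs, k):
    """Haar coefficients of the circularly shifted signal, by brute force."""
    return haar_fwd(np.roll(haar_inv(coeffs), k))
```

Replacing `shifted_coeffs_naive` with a direct map on the coefficients is what avoids reconstructing dense data at run time in wavelet-based IBR.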

    Towards Predictive Rendering in Virtual Reality

    The quest to generate predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational efforts existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research. This thesis also contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations for spatially varying surface materials. 
The techniques proposed by this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying them to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming that problems remain to be solved to achieve truly predictive image generation.

    Wavelets In Real-time Rendering

    Interactively simulating the visual appearance of natural objects under natural illumination is a fundamental problem in computer graphics. 3D computer games, geometry modeling, training and simulation, electronic commerce, visualization, lighting design, digital libraries, geographical information systems, and economic and medical image processing are typical candidate applications. Recent advances in graphics hardware have enabled real-time rasterization of complex scenes under artificial lighting environments. Meanwhile, precomputation-based soft shadow algorithms have proven effective under low-frequency lighting environments. Under the more practical and popular all-frequency natural lighting environment, however, real-time rendering of dynamic scenes still remains a challenging problem. In this dissertation, we propose a systematic approach to render dynamic glossy objects under a general all-frequency lighting environment. In our framework, lighting integration is reduced to two rather basic mathematical operations: efficiently computing multi-function products and product integrals. The main contribution of our work is a novel mathematical representation and analysis of the multi-function product and product integral in the wavelet domain. We show that the multi-function product integral in the primal is equivalent to a summation of products of basis coefficients and integral coefficients. We give a novel Generalized Haar Integral Coefficient Theorem, and present a set of efficient algorithms to compute multi-function products and product integrals. We demonstrate practical applications of these algorithms in the interactive rendering of dynamic glossy objects under distant, time-variant, all-frequency environment lighting with arbitrary view conditions. At each vertex, the shading integral is formulated as the product integral of multiple operand functions. 
By approximating operand functions in the wavelet domain, we demonstrate rendering dynamic glossy scenes interactively, which is orders of magnitude faster than previous work. As an important enhancement to the popular Pre-computation Based Radiance Transfer (PRT) approach, we present a novel Just-in-time Radiance Transfer (JRT) technique, and demonstrate its application in real-time realistic rendering of dynamic all-frequency shadows under general lighting environment. Our work is a significant step towards real-time rendering of arbitrary scenes under general lighting environment. It is also of great importance to general numerical analysis and signal processing
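The two-function case of the coefficient identity mentioned above is Parseval's theorem for the orthonormal Haar basis, sketched below; the dissertation's Generalized Haar Integral Coefficient Theorem extends this to products of more than two functions via integral coefficients, which this sketch does not reproduce.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar matrix of size n x n, n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # coarser scaling/wavelet rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # finest wavelet rows
    m = np.vstack([top, bot])
    return m / np.linalg.norm(m, axis=1, keepdims=True)

H = haar_matrix(8)
f = np.arange(8.0)
g = np.linspace(1.0, 2.0, 8)

# Parseval: the primal inner product equals the coefficient inner product,
# so the product integral of two piecewise-constant functions on [0, 1]
# equals (1/n) * sum of products of their Haar coefficients.
primal = f @ g
wavelet = (H @ f) @ (H @ g)
```

Because the identity holds coefficient-by-coefficient, truncating both expansions to the largest terms gives an approximate product integral at a fraction of the cost, which is what makes the wavelet-domain shading integral interactive.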