
    A Theoretical Analysis of Compactness of the Light Transport Operator

    Get PDF
    Rendering photorealistic visuals of virtual scenes requires tractable models for the simulation of light. The rendering equation describes one such model using an integral equation, the crux of which is a continuous integral operator. A majority of rendering algorithms aim to approximate the effect of this light transport operator via discretization (using rays, particles, patches, etc.). Research spanning four decades has uncovered interesting properties and intuition surrounding this operator. In this paper we analyze compactness, a key property that is independent of its discretization and which characterizes the ability to approximate the operator uniformly by a sequence of finite-rank operators. We conclusively prove lingering suspicions that this operator is not compact, and therefore that any discretization relying on finite-rank operators or non-adaptive finite bases is susceptible to unbounded error over arbitrary light distributions. Our result justifies the expectation that rendering algorithms be evaluated on a variety of scenes and illumination conditions. We also discover that its lower-dimensional counterpart (over purely diffuse scenes) is not compact except in special cases, and uncover connections with it being non-invertible and acting as a low-pass filter. We explain the relevance of our results in the context of previous work. We believe that our theoretical results will inform future rendering algorithms regarding practical choices.
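    For reference, the operator in question appears in the standard operator form of the rendering equation; compactness asks whether it can be approximated uniformly by finite-rank operators. The sketch below uses generic notation and is not quoted from the paper.

```latex
% Rendering equation in operator form: outgoing radiance L equals the
% emitted radiance L_e plus the transported radiance T L.
\[
  L = L_e + T L,
  \qquad
  (T L)(x, \omega_o) =
    \int_{\Omega} f_r(x, \omega_i, \omega_o)\,
      L\big(r(x, \omega_i), -\omega_i\big)\,\cos\theta_i \,\mathrm{d}\omega_i .
\]
% T is compact iff there exist finite-rank operators T_n with
% \|T - T_n\| \to 0 in operator norm; the paper shows that, in general,
% no such sequence exists.
```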

    Glossy Probe Reprojection for Interactive Global Illumination

    Get PDF
    Recent rendering advances dramatically reduce the cost of global illumination. But even with hardware acceleration, complex light paths with multiple glossy interactions are still expensive; our new algorithm stores these paths in precomputed light probes and reprojects them at runtime to provide interactivity. Combined with traditional light maps for diffuse lighting, our approach interactively renders all light paths in static scenes with opaque objects. Naively reprojecting probes with glossy lighting is memory-intensive, requires efficient access to the correctly reflected radiance, and exhibits problems at occlusion boundaries in glossy reflections. Our solution addresses all these issues. To minimize memory, we introduce an adaptive light probe parameterization that allocates increased resolution to shinier surfaces and regions of higher geometric complexity. To efficiently sample glossy paths, our novel gathering algorithm reprojects probe texels in a view-dependent manner using efficient reflection estimation and a fast rasterization-based search. Naive probe reprojection often sharpens glossy reflections at occlusion boundaries, due to changes in parallax. To avoid this, we split the convolution induced by the BRDF into two steps: we precompute probes using a lower material roughness and apply an adaptive bilateral filter at runtime to reproduce the original surface roughness. Combining these elements, our algorithm interactively renders complex scenes while fitting within the memory, bandwidth, and computation constraints of current hardware.
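    The two-step convolution can be pictured with a common roughness-composition heuristic for Gaussian-like specular lobes; this is an illustration of the principle, not necessarily the exact filter derivation used in the paper.

```latex
% Treating the BRDF lobe as approximately Gaussian, successive
% convolutions add variances, so probes prefiltered at a reduced
% roughness \alpha_pre leave a residual filter for runtime:
\[
  \alpha_{\mathrm{runtime}} \approx
    \sqrt{\alpha_{\mathrm{target}}^{2} - \alpha_{\mathrm{pre}}^{2}},
  \qquad \alpha_{\mathrm{pre}} < \alpha_{\mathrm{target}},
\]
% applied at render time as an adaptive bilateral blur that respects
% occlusion boundaries.
```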

    Hierarchical Variance Reduction Techniques for Monte Carlo Rendering

    Get PDF
    Ever since the first three-dimensional computer graphics appeared half a century ago, the goal has been to model and simulate how light interacts with materials and objects to form an image. The ultimate goal is photorealistic rendering, where the created images reach a level of accuracy that makes them indistinguishable from photographs of the real world. There are many applications: visualization of products and architectural designs yet to be built, special effects, computer-generated films, virtual reality, and video games, to name a few. However, the problem has proven tremendously complex; the illumination at any point is described by a recursive integral to which a closed-form solution seldom exists. Instead, computer simulation and Monte Carlo methods are commonly used to statistically estimate the result. This introduces undesirable noise, or variance, and a large body of research has been devoted to finding ways to reduce the variance. I continue along this line of research, and present several novel techniques for variance reduction in Monte Carlo rendering, as well as a few related tools. The research in this dissertation focuses on using importance sampling to pick a small set of well-distributed point samples. As the primary contribution, I have developed the first methods to explicitly draw samples from the product of distant high-frequency lighting and complex reflectance functions. By sampling the product, low-noise results can be achieved using a very small number of samples, which is important to minimize rendering times. Several different hierarchical representations are explored to allow efficient product sampling. In the first publication, the key idea is to work in a compressed wavelet basis, which allows fast evaluation of the product. Many of the initial restrictions of this technique were removed in follow-up work, allowing higher-resolution uncompressed lighting and avoiding precomputation of reflectance functions. My second main contribution is to present one of the first techniques to take the triple product of lighting, visibility and reflectance into account to further reduce the variance in Monte Carlo rendering. For this purpose, control variates are combined with importance sampling to solve the problem in a novel way. A large part of the technique also focuses on analysis and approximation of the visibility function. To further refine the above techniques, several useful tools are introduced. These include a fast, low-distortion map to represent (hemi)spherical functions, a method to create high-quality quasi-random points, and an optimizing compiler for analyzing shaders using interval arithmetic. The latter automatically extracts bounds for importance sampling of arbitrary shaders, as opposed to using a priori known reflectance functions. In summary, the work presented here takes the field of computer graphics one step further towards making photorealistic rendering practical for a wide range of uses. By introducing several novel Monte Carlo methods, more sophisticated lighting and materials can be used without increasing computation times. The research is aimed at domain-specific solutions to the rendering problem, but I believe that much of the new theory is applicable in other parts of computer graphics, as well as in other fields.
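    As a toy illustration of the product-sampling idea, the sketch below draws samples proportional to the product of two tabulated 1D functions. It is a minimal sketch of my own (all names are illustrative), not the hierarchical wavelet machinery developed in the thesis.

```python
import numpy as np

def sample_product(lighting, brdf, n_samples, rng):
    """Draw sample indices proportional to the product of two tabulated,
    non-negative functions (e.g. incident lighting and reflectance over
    a discretised set of directions)."""
    product = lighting * brdf                 # unnormalised target
    pdf = product / product.sum()             # discrete pdf over bins
    cdf = np.cumsum(pdf)
    u = rng.random(n_samples)
    idx = np.minimum(np.searchsorted(cdf, u), pdf.size - 1)  # invert CDF
    return idx, pdf[idx]

# Usage: estimate sum_i L_i * f_i * V_i with the product L*f as the
# importance distribution; visibility V stays in the integrand and is
# the main remaining source of variance.
rng = np.random.default_rng(0)
lighting = rng.random(256)
brdf = rng.random(256)
visibility = (rng.random(256) > 0.3).astype(float)
idx, pdf = sample_product(lighting, brdf, 64, rng)
estimate = np.mean(lighting[idx] * brdf[idx] * visibility[idx] / pdf[idx])
reference = np.sum(lighting * brdf * visibility)
```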

    Real-time Cinematic Design Of Visual Aspects In Computer-generated Images

    Get PDF
    The creation of visually pleasing images has always been one of the main goals of computer graphics. Two important components are necessary to achieve this goal --- artists who design the visual aspects of an image (such as materials or lighting) and sophisticated algorithms that render the image. Traditionally, rendering has been of greater interest to researchers, while the design part has always been deemed secondary. This has led to many inefficiencies, as artists, in order to create a stunning image, are often forced to resort to the traditional, creativity-barring pipelines consisting of repeated rendering and parameter tweaking. Our work shifts the attention away from the rendering problem and focuses on the design. We propose to combine non-physical editing with real-time feedback and provide artists with efficient ways of designing complex visual aspects such as global illumination or all-frequency shadows. We conform to existing pipelines by inserting our editing components into existing stages, thereby making the editing of visual aspects an inherent part of the design process. Many of the examples shown in this work have been, until now, extremely hard to achieve. The non-physical aspect of our work enables artists to express themselves in more creative ways, not limited by the physical parameters of current renderers. Real-time feedback allows artists to immediately see the effects of applied modifications, and compatibility with existing workflows enables easy integration of our algorithms into production pipelines.

    Efficient Many-Light Rendering of Scenes with Participating Media

    Get PDF
    We present several approaches based on virtual lights that aim to capture light transport without compromising quality, while preserving the elegance and efficiency of many-light rendering. By reformulating the integration scheme, we obtain two numerically efficient techniques: one tailored specifically for interactive, high-quality lighting on surfaces, and one for handling scenes with participating media.
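    The common starting point for such methods is the many-light gathering estimator, sketched here for surfaces in generic notation (not quoted from the thesis); participating media additionally introduce transmittance and phase-function terms along the connecting segments.

```latex
% Outgoing radiance at a shading point x, approximated as a sum over
% N virtual point lights (VPLs) with positions y_k and intensities I_k;
% V is binary visibility, and the cosine/distance factor is the usual
% geometry term.
\[
  L(x, \omega_o) \approx \sum_{k=1}^{N}
    f_r(x, \omega_k, \omega_o)\,
    \frac{\cos\theta_x \,\cos\theta_{y_k}}{\lVert x - y_k \rVert^{2}}\,
    V(x, y_k)\, I_k
\]
```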

    Towards Predictive Rendering in Virtual Reality

    Get PDF
    The generation of predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational efforts, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research. This thesis also contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling through efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations for spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying them to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects or more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming that problems remain to be solved to achieve truly predictive image generation.
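    For context, a bidirectional texture function tabulates appearance over surface position and both light and view directions; the standard definition below is general background, not specific to this thesis.

```latex
% A BTF is a six-dimensional, per-texel reflectance dataset:
\[
  \mathrm{BTF} : (u, v, \theta_i, \phi_i, \theta_o, \phi_o)
    \;\longmapsto\; \text{apparent reflectance},
\]
% i.e. spatial position (u, v) plus incoming and outgoing directions,
% which is why compression and synthesis are needed for real-time use.
```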

    Acquisition and modeling of material appearance

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 131-143). In computer graphics, the realistic rendering of synthetic scenes requires a precise description of surface geometry, lighting, and material appearance. While 3D geometry scanning and modeling have advanced significantly in recent years, measurement and modeling of accurate material appearance have remained critical challenges. Analytical models are the main tools to describe material appearance in most current applications. They provide compact and smooth approximations to real materials but lack the expressiveness to represent complex materials. Data-driven approaches based on exhaustive measurements are fully general, but the measurement process is difficult and the storage requirement is very high. In this thesis, we propose the use of hybrid representations that are more compact and easier to acquire than exhaustive measurement, while preserving much of the generality of a data-driven approach. To represent complex bidirectional reflectance distribution functions (BRDFs), we present a new method to estimate a general microfacet distribution from measured data. We show that this representation is able to reproduce complex materials that are impossible to model with purely analytical models. We also propose a new method that significantly reduces the measurement cost and time of the bidirectional texture function (BTF) through a statistical characterization of texture appearance. Our reconstruction method combines naturally aligned images and alignment-insensitive statistics to produce visually plausible results. We demonstrate our acquisition system, which is able to capture intricate materials like fabrics in less than ten minutes with commodity equipment. In addition, we present a method to facilitate effective user design in the space of material appearance. We introduce a metric in the space of reflectance which corresponds roughly to perceptual measures. The main idea of our approach is to evaluate reflectance differences in terms of their induced rendered images, instead of the reflectance function itself defined in the angular domains. With rendered images, we show that even a simple computational metric can provide good perceptual spacing and enable intuitive navigation of the reflectance space. by Wai Kit Addy Ngan, Ph.D.
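    The microfacet representation referred to above has the standard form below, with the normal distribution D estimated from measured data rather than chosen analytically; this is textbook notation, offered only as background.

```latex
% Microfacet BRDF with half-vector h, Fresnel term F, shadowing-masking
% term G, and normal (microfacet) distribution D estimated from data.
\[
  f_r(\omega_i, \omega_o) =
    \frac{F(\omega_i, h)\, G(\omega_i, \omega_o)\, D(h)}
         {4 \cos\theta_i \cos\theta_o},
  \qquad
  h = \frac{\omega_i + \omega_o}{\lVert \omega_i + \omega_o \rVert}
\]
```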

    Optimising Spatial and Tonal Data for PDE-based Inpainting

    Full text link
    Some recent methods for lossy signal and image compression store only a few selected pixels and fill in the missing structures by inpainting with a partial differential equation (PDE). Suitable operators include the Laplacian, the biharmonic operator, and edge-enhancing anisotropic diffusion (EED). The quality of such approaches depends substantially on the selection of the data that is kept. Optimising this data in the domain and codomain gives rise to challenging mathematical problems that shall be addressed in our work. In the 1D case, we prove results that provide insights into the difficulty of this problem, and we give evidence that a splitting into spatial and tonal (i.e. function value) optimisation hardly deteriorates the results. In the 2D setting, we present generic algorithms that achieve a high reconstruction quality even if the specified data is very sparse. To optimise the spatial data, we use a probabilistic sparsification, followed by a nonlocal pixel exchange that avoids getting trapped in bad local optima. After this spatial optimisation we perform a tonal optimisation that modifies the function values in order to reduce the global reconstruction error. For homogeneous diffusion inpainting, this comes down to a least squares problem for which we prove that it has a unique solution. We demonstrate that it can be found efficiently with a gradient descent approach that is accelerated with fast explicit diffusion (FED) cycles. Our framework allows the desired density of the inpainting mask to be specified a priori. Moreover, it is more generic than other data optimisation approaches for the sparse inpainting problem, since it can also be extended to nonlinear inpainting operators such as EED. This is exploited to achieve reconstructions with state-of-the-art quality. We also give an extensive literature survey on PDE-based image compression methods.
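    A minimal sketch of homogeneous diffusion inpainting is given below under simplifying assumptions (plain Jacobi iteration, periodic boundaries, illustrative names); it is not the FED-accelerated solver or the spatial/tonal optimisation from the paper, only the reconstruction step they build on.

```python
import numpy as np

def diffusion_inpaint(image, mask, n_iter=5000):
    """Homogeneous diffusion inpainting: pixels where mask is True keep
    their stored values (Dirichlet data); all other pixels are filled by
    iterating the discrete Laplace equation towards steady state.
    Boundaries are treated as periodic for brevity."""
    u = np.where(mask, image, image[mask].mean())
    for _ in range(n_iter):
        # 4-neighbour average: one Jacobi step towards Laplace(u) = 0.
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, image, avg)
    return u

# Usage: keep roughly 2 % of the pixels of a smooth test image.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.sin(3 * x) * np.cos(2 * y)
mask = rng.random((64, 64)) < 0.02
restored = diffusion_inpaint(img, mask)
```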

    Sequential Monte Carlo Instant Radiosity

    Get PDF
    The focus of this thesis is to accelerate the synthesis of physically accurate images using computers. Such images are generated by simulating how light flows in the scene using unbiased Monte Carlo algorithms. To date, the efficiency of these algorithms has been too low for real-time rendering of error-free images. This limits the applicability of physically accurate image synthesis in interactive contexts, such as pre-visualization or video games. We focus on the well-known Instant Radiosity algorithm by Keller [1997], which approximates the indirect light field using virtual point lights (VPLs). This approximation is unbiased and has the characteristic that the error is spread out over large areas in the image. This low-frequency noise manifests as an unwanted 'flickering' effect in image sequences if not kept temporally coherent. Currently, the limited VPL budget imposed by running the algorithm at interactive rates results in images which may noticeably differ from the ground truth. We introduce two new algorithms that alleviate these issues. The first, clustered hierarchical importance sampling, reduces the overall error by increasing the VPL budget without incurring a significant performance cost. It uses an unbiased Monte Carlo estimator to estimate the sensor response caused by all VPLs. We reduce the variance of this estimator with an efficient hierarchical importance sampling method. The second, sequential Monte Carlo Instant Radiosity, generates the VPLs using heuristic sampling and employs non-parametric density estimation to resolve their probability densities. As a result, the algorithm is able to reduce the number of VPLs that move between frames, while also placing them in regions where they bring light to the image. This increases the quality of the individual frames while keeping the noise temporally coherent, and thus less noticeable, between frames. When combined, the two algorithms form a rendering system that performs favourably against traditional path tracing methods, both in terms of performance and quality. Unlike prior VPL-based methods, our system does not suffer from the objectionable lack of temporal coherence in highly occluded scenes.
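    To make the first contribution's estimator concrete, here is a deliberately simplified sketch of my own (flat rather than hierarchical importance sampling, with made-up toy data): expensive visibility is evaluated only for a sampled subset of VPLs, yet the estimate of the full sum over all VPLs remains unbiased.

```python
import numpy as np

def estimate_sensor_response(unoccluded, visibility, n_samples, rng):
    """Unbiased estimate of sum_k unoccluded[k] * visibility[k] over all
    VPLs, tracing shadow rays (here: looking up 'visibility') only for an
    importance-sampled subset drawn proportionally to the cheap,
    unoccluded per-VPL contributions."""
    pdf = unoccluded / unoccluded.sum()
    k = rng.choice(unoccluded.size, size=n_samples, p=pdf)
    return np.mean(unoccluded[k] * visibility[k] / pdf[k])

# Toy data: 10,000 VPLs with random intensities, ~70 % of them visible.
rng = np.random.default_rng(1)
unoccluded = rng.exponential(size=10_000)
visibility = (rng.random(10_000) > 0.3).astype(float)
estimate = estimate_sensor_response(unoccluded, visibility, 256, rng)
reference = np.sum(unoccluded * visibility)   # exact sum for comparison
```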