
    Auxiliary Features-Guided Super Resolution for Monte Carlo Rendering

    This paper investigates super-resolution as a way to reduce the number of pixels to render and thus speed up Monte Carlo rendering algorithms. While great progress has been made in super-resolution technologies, super-resolution is essentially an ill-posed problem and on its own cannot recover high-frequency details in renderings. To address this problem, we exploit high-resolution auxiliary features to guide the super-resolution of low-resolution renderings. These high-resolution auxiliary features can be rendered quickly by a rendering engine and at the same time provide valuable high-frequency details to assist super-resolution. To this end, we develop a cross-modality Transformer network that consists of an auxiliary feature branch and a low-resolution rendering branch. These two branches are designed to fuse high-resolution auxiliary features with the corresponding low-resolution rendering. Furthermore, we design Residual Densely Connected Swin Transformer groups that learn to extract representative features and enable high-quality super-resolution. Our experiments show that our auxiliary features-guided super-resolution method outperforms both super-resolution methods and Monte Carlo denoising methods in producing high-quality renderings.
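    A minimal sketch of the two-branch idea described above, not the paper's actual architecture: one branch processes high-resolution auxiliary buffers (here assumed to be normal, albedo, and depth, 7 channels), the other processes the upsampled low-resolution rendering, and the fused features predict a residual on top of naive upsampling. All layer sizes and the 2x scale factor are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxGuidedSR(nn.Module):
    def __init__(self, aux_channels=7, feat=32, scale=2):
        super().__init__()
        self.scale = scale
        # Branch 1: features from high-resolution auxiliary buffers.
        self.aux_branch = nn.Sequential(
            nn.Conv2d(aux_channels, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Branch 2: features from the (upsampled) low-resolution rendering.
        self.render_branch = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Fusion and reconstruction of the high-resolution color image.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3, 3, padding=1),
        )

    def forward(self, lr_render, hr_aux):
        up = F.interpolate(lr_render, scale_factor=self.scale,
                           mode="bilinear", align_corners=False)
        fused = torch.cat([self.render_branch(up), self.aux_branch(hr_aux)], dim=1)
        # Predict a residual on top of the naive upsampling.
        return up + self.fuse(fused)

# Example: 2x super-resolution of a 64x64 rendering guided by 128x128 aux buffers.
model = AuxGuidedSR()
hr = model(torch.rand(1, 3, 64, 64), torch.rand(1, 7, 128, 128))
print(hr.shape)  # torch.Size([1, 3, 128, 128])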

    Adversarial Monte Carlo Denoising with Conditioned Auxiliary Feature Modulation

    Denoising Monte Carlo renderings at very low sample rates remains a major challenge in photo-realistic rendering research. Many previous works, including regression-based and learning-based methods, have explored how to achieve better rendering quality at lower computational cost. However, most of these methods rely on handcrafted optimization objectives, which lead to artifacts such as blur and unfaithful details. In this paper, we present an adversarial approach to denoising Monte Carlo renderings. Our key insight is that generative adversarial networks can help denoiser networks produce more realistic high-frequency details and global illumination by learning the distribution of a set of high-quality Monte Carlo path-traced images. We also adapt a novel feature modulation method to better utilize auxiliary features, including normal, albedo, and depth. Compared to previous state-of-the-art methods, our approach produces a better reconstruction of the Monte Carlo integral from a few samples, performs more robustly at different sample rates, and takes only a second for megapixel images.
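    One common way to realize conditioned auxiliary feature modulation is a FiLM-style scale-and-shift of intermediate denoiser features predicted from the auxiliary buffers. The sketch below is an assumed, generic version of that idea, not the authors' exact network; the 7 auxiliary channels (normal, albedo, depth) and the 64 feature channels are placeholders.

import torch
import torch.nn as nn

class AuxFeatureModulation(nn.Module):
    def __init__(self, aux_channels=7, feat=64):
        super().__init__()
        # Predict a per-channel (gamma, beta) pair from the auxiliary buffers.
        self.to_scale_shift = nn.Sequential(
            nn.Conv2d(aux_channels, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 2 * feat, 3, padding=1),
        )

    def forward(self, features, aux):
        gamma, beta = self.to_scale_shift(aux).chunk(2, dim=1)
        # Modulate the denoiser's intermediate features per channel and per pixel.
        return features * (1.0 + gamma) + beta

# Usage inside a denoiser block: radiance features are modulated by the
# auxiliary buffers before further convolutions; the adversarial loss sits
# on top of the full denoiser and is omitted here.
mod = AuxFeatureModulation()
out = mod(torch.rand(1, 64, 128, 128), torch.rand(1, 7, 128, 128))
print(out.shape)  # torch.Size([1, 64, 128, 128])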

    Deep-learning the Latent Space of Light Transport

    We suggest a method to directly deep-learn light transport, i.e., the mapping from a 3D geometry-illumination-material configuration to a shaded 2D image. While many previous learning methods have applied 2D convolutional neural networks to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi-transparent geometry. To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected to the 2D output image. Thus, we suggest a two-stage operator comprising a 3D network that first transforms the point cloud into a latent representation, which is then projected to the 2D output image by a dedicated 3D-2D network in a second step. We show that our approach results in improved quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods. As a consequence, the proposed two-stage operator serves as a valuable extension to modern deferred shading approaches.
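    To make the two-stage operator concrete, the sketch below is a heavily simplified assumption of the pipeline, not the paper's implementation: a per-point MLP maps each point and its attributes to a latent feature, the points are projected to the image plane (an orthographic placeholder stands in for a real camera), their latent features are splatted into a 2D feature buffer, and a small 2D CNN decodes that buffer into the shaded image. Attribute count, latent size, and image size are arbitrary.

import torch
import torch.nn as nn

class LatentLightTransport(nn.Module):
    def __init__(self, point_attrs=9, latent=16, image_size=64):
        super().__init__()
        self.image_size = image_size
        # Stage 1: per-point network (position + attributes -> latent feature).
        self.point_mlp = nn.Sequential(
            nn.Linear(3 + point_attrs, 64), nn.ReLU(inplace=True),
            nn.Linear(64, latent),
        )
        # Stage 2: 2D decoder from the splatted latent image to RGB.
        self.decoder = nn.Sequential(
            nn.Conv2d(latent, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, positions, attributes):
        # positions: (N, 3) in [0, 1]^3; attributes: (N, point_attrs).
        latent = self.point_mlp(torch.cat([positions, attributes], dim=-1))
        # Orthographic projection onto the xy-plane (placeholder for a real camera).
        px = (positions[:, 0] * (self.image_size - 1)).long()
        py = (positions[:, 1] * (self.image_size - 1)).long()
        buf = torch.zeros(latent.shape[1], self.image_size, self.image_size)
        buf[:, py, px] = latent.t()  # nearest-pixel splatting; colliding points overwrite
        return self.decoder(buf.unsqueeze(0))

model = LatentLightTransport()
img = model(torch.rand(500, 3), torch.rand(500, 9))
print(img.shape)  # torch.Size([1, 3, 64, 64])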

    A Dataset for Training and Testing Rendering Methods

    Physically-based rendering is a computationally taxing and time-consuming process. High-quality rendering requires a significant amount of memory and time to calculate the trajectories and colors of the rays that contribute to each pixel, especially as scenes become more complex and more computation is required for clean renders. Each ray used to calculate the color of a pixel requires a large number of calculations to accurately determine its red-green-blue value, taking into account not only the material of the object the ray hits but also the surrounding objects and lighting. Monte Carlo rendering was developed as an algorithm to render an image of a three-dimensional scene flexibly and realistically. It is now a common rendering method, but when the sample count is lowered to truly speed up the process, the random sampling used to choose rays and approximate pixel values leaves significant flaws in the image. These flaws resemble TV static and are called noise. To keep the speed of low-sample rendering while continuing to produce high-quality results, denoising algorithms were developed that take the noisy images produced by the Monte Carlo algorithm and remove the noise to recreate a clean version of the render. To be worthwhile, these denoising algorithms must not consume so much memory and time that they offset the savings of the low-sample rendering method. This is difficult, because the accuracy of a render originally came from careful and thorough calculations for each pixel, so these algorithms must find an alternative way to fill in the gaps using the approximations in the noisy image. Over time, denoising algorithms have evolved to use increasingly elaborate neural networks to overcome the accuracy and clarity issues of previous methods. These neural networks, though essential to faster and cheaper rendering, require extensive training from currently limited datasets. This research aims to expand the pool of data available for testing denoising algorithms on renders created from three-dimensional scenes, and to test its effectiveness in training current denoising algorithms alongside the existing data. The accuracy of the final tests was measured using the mean squared error and the peak signal-to-noise ratio, both commonly used to objectively evaluate the difference between the control image and the output of the algorithm. The new data, when used in combination with the existing datasets, was found to be effective in improving the results of these algorithms. This supports the idea that larger and, more importantly, more diverse data with distinct characteristics is beneficial for creating increasingly effective denoising algorithms. This is especially true as renderers become more efficient and methods of expressing diverse real-world visual phenomena become more accurate. With those improvements, more robust denoising neural networks will be necessary to create professional-looking renders. An extended dataset will allow neural networks to be trained as accurately as possible, so that renders can be created quickly and accurately for high-importance products, such as frames of final versions of movies for animation studios.
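    The two evaluation measures mentioned above have standard definitions. The sketch below assumes images stored as floating-point arrays in [0, 1]; the function names and the test data are illustrative.

import numpy as np

def mse(reference, output):
    # Mean squared error between a reference render and a denoiser output.
    return float(np.mean((reference - output) ** 2))

def psnr(reference, output, peak=1.0):
    # Peak signal-to-noise ratio in decibels; higher means closer to the reference.
    err = mse(reference, output)
    if err == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)

# Synthetic example: compare a reference image against a slightly perturbed copy.
ref = np.random.rand(256, 256, 3)
noisy = np.clip(ref + 0.05 * np.random.randn(256, 256, 3), 0.0, 1.0)
print(mse(ref, noisy), psnr(ref, noisy))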

    Learning Sample-Based Monte Carlo Denoising from Noisy Training Data

    Monte Carlo rendering allows for the production of high-quality photorealistic images of 3D scenes. However, producing noise-free images can require a considerable amount of compute resources. To lessen this burden and speed up the rendering process while maintaining similar quality, a lower-sample-count image can be rendered and then denoised with image-space denoising methods. These methods are widely used in industry and have recently enabled advances in areas such as real-time ray tracing. While hand-tuned denoisers are available, the most successful denoising methods are based on machine learning with deep convolutional neural networks (CNNs). These denoisers are trained on large datasets of rendered images consisting of pairs of low-sample-count noisy images and the corresponding high-sample-count reference images. Unfortunately, generating these datasets can be prohibitively expensive because of the cost of rendering thousands of high-sample-count reference images. A potential solution to this problem comes from the Noise2Noise method, in which denoisers are learned solely from noisy training data. Lehtinen et al. applied their technique to Monte Carlo denoising and were able to achieve performance similar to training on clean reference images. However, their model was a proof of concept, and it is unclear whether the technique works equally well with state-of-the-art Monte Carlo denoising methods. The authors also do not test their hypothesis that better results could be achieved by training on the additional noisy training data that could be generated with the compute budget previously allocated to generating clean training data. Finally, it remains to be seen whether the authors' suggested parameters are equally effective when Noise2Noise is used with different denoising methods. In this thesis, I answer these questions by applying Noise2Noise to a state-of-the-art Monte Carlo denoising algorithm called Sample-Based Monte Carlo Denoising (SBMC). I adapt the SBMC scene generator to produce a dataset of noisy image pairs, use this dataset to train an SBMC-like CNN, and conduct experiments to determine the impact of various parameters on the performance of the denoiser. My results show that the Noise2Noise technique can be effectively applied to a state-of-the-art Monte Carlo denoising algorithm. I achieve results comparable to the original implementation at a significantly lower cost. I find that using additional training data can further improve these results, although more investigation is needed in this area. Finally, I detail the parameters that were necessary to achieve these results.
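    The core of Noise2Noise-style training is that the loss target is a second, independently rendered noisy image of the same scene rather than a clean reference. The sketch below shows only that training-loop idea with a hypothetical stand-in network and synthetic tensors; it is not the SBMC code, and in practice both input and target would be two low-sample-count renders of the same scene with different random seeds.

import torch
import torch.nn as nn

denoiser = nn.Sequential(               # stand-in for an SBMC-like denoiser
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

for step in range(100):
    # Placeholder data: in a real setup these are two independent noisy renders
    # of the same scene produced by the renderer, not derived from one another.
    noisy_input = torch.rand(4, 3, 64, 64)
    noisy_target = noisy_input + 0.1 * torch.randn_like(noisy_input)

    optimizer.zero_grad()
    loss = loss_fn(denoiser(noisy_input), noisy_target)
    loss.backward()
    optimizer.step()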