
    Image Sampling with Quasicrystals

    We investigate the use of quasicrystals in image sampling. Quasicrystals produce space-filling, non-periodic point sets that are uniformly discrete and relatively dense, thereby ensuring that the sample sites are evenly spread throughout the sampled image. Their self-similar structure can be attractive for creating sampling patterns endowed with a decorative symmetry. We present a brief general overview of the algebraic theory of cut-and-project quasicrystals based on the geometry of the golden ratio. To assess the practical utility of quasicrystal sampling, we evaluate the visual effects of a variety of non-adaptive image sampling strategies on photorealistic image reconstruction and non-photorealistic image rendering used in multiresolution image representations. For computer visualization of point sets used in image sampling, we introduce a mosaic rendering technique.
    Comment: For a full resolution version of this paper, along with supplementary materials, please visit http://www.Eyemaginary.com/Portfolio/Publications.htm
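    To make the cut-and-project construction concrete, below is a minimal 1-D sketch based on the golden ratio: it generates the Fibonacci chain by keeping those points of the integer lattice Z^2 whose projection onto an "internal" axis falls inside a finite window. This is only an illustration of the mechanism the paper generalizes to 2-D sampling patterns; the function name and parameters are ours, not the paper's.

        import numpy as np

        def fibonacci_chain(n_range=60):
            """1-D cut-and-project quasicrystal (the Fibonacci chain).

            Each lattice point (m, n) of Z^2 is split into a 'physical'
            coordinate (projection onto a line of slope 1/phi) and an
            'internal' coordinate (projection onto the perpendicular axis).
            Points whose internal coordinate falls inside a finite window
            are kept; their physical coordinates form a non-periodic,
            uniformly discrete and relatively dense point set.
            """
            phi = (1.0 + np.sqrt(5.0)) / 2.0
            theta = np.arctan(1.0 / phi)
            c, s = np.cos(theta), np.sin(theta)
            window = c + s  # projection of the unit square onto the internal axis

            points = []
            for m in range(-n_range, n_range + 1):
                for n in range(-n_range, n_range + 1):
                    physical = m * c + n * s
                    internal = -m * s + n * c
                    if 0.0 <= internal < window:
                        points.append(physical)
            return np.sort(np.array(points))

        chain = fibonacci_chain()
        gaps = np.diff(chain)
        # Exactly two gap lengths occur, and their ratio is the golden ratio.
        print(sorted(set(np.round(gaps, 6))))

    Running the sketch prints two distinct spacings whose ratio is the golden ratio, the hallmark of a point set that is non-periodic yet uniformly discrete and relatively dense.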

    Fourier Analysis of Stochastic Sampling Strategies for Assessing Bias and Variance in Integration


    Master of Science

    Virtual point lights (VPLs) provide an effective solution to global illumination computation by converting indirect illumination into direct illumination from many virtual light sources. This approach results in a less noisy image compared to Monte Carlo methods. In addition, the number of VPLs to generate can be specified in advance; therefore, it can be adjusted depending on the scene, desired quality, time budget, and the available computational power. In this thesis, we investigate a new technique that carefully places VPLs to improve the quality of global illumination computed with them. Our method consists of three passes. In the first pass, we randomly generate a large number of VPLs in the scene, starting from the camera, to place them in positions that can contribute to the final rendered image. Then, we remove a considerable number of these VPLs using a Poisson disk sample elimination method to obtain a subset of VPLs that is uniformly distributed over the part of the scene that is indirectly visible to the camera. The second pass estimates the radiant intensity of these VPLs by performing light tracing from the original light sources in the scene and scattering the radiance of light rays at each hit point to the VPLs close to that point. The final pass renders the scene by shading all points visible to the camera using the original light sources and the VPLs.
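    The Poisson disk sample elimination step mentioned above can be sketched as greedy weighted elimination: every candidate accumulates a weight from its close neighbours, and the heaviest candidate is removed until the target count remains. The code below is a simplified, brute-force illustration under assumptions of our own (planar candidates, an area-based heuristic for the elimination radius, O(n^2) neighbour search); the thesis applies elimination to VPL positions on 3-D scene surfaces, and the names here are illustrative.

        import numpy as np

        def eliminate_samples(points, target_count, alpha=8.0):
            """Greedy weighted sample elimination (simplified, brute force).

            Repeatedly removes the sample with the highest accumulated
            weight, where nearby samples penalize each other, until only
            target_count samples remain. The survivors approximate a
            Poisson-disk (blue-noise) distribution.
            """
            pts = np.asarray(points, dtype=float)
            n = len(pts)
            # Heuristic elimination radius for 2-D points spread over a
            # bounding-box area (a 3-D scene would use surface area instead).
            area = np.ptp(pts[:, 0]) * np.ptp(pts[:, 1])
            r_max = np.sqrt(area / (2.0 * np.sqrt(3.0) * target_count))

            dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            np.fill_diagonal(dists, np.inf)
            # Pairwise weight: falls off to zero at distance 2 * r_max.
            d = np.minimum(dists, 2.0 * r_max)
            w = (1.0 - d / (2.0 * r_max)) ** alpha
            weights = w.sum(axis=1)

            alive = np.ones(n, dtype=bool)
            for _ in range(n - target_count):
                i = np.argmax(np.where(alive, weights, -np.inf))
                alive[i] = False
                weights -= w[:, i]  # neighbours of i become lighter
            return pts[alive]

        # Example: keep 200 blue-noise-like candidates out of 2000 random ones.
        rng = np.random.default_rng(1)
        kept = eliminate_samples(rng.random((2000, 2)), 200)

    A production implementation would replace the quadratic distance matrix with a spatial search structure and a heap over the weights.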

    Quasi-Monte Carlo Algorithms (not only) for Graphics Software

    Quasi-Monte Carlo methods have become the industry standard in computer graphics. For that purpose, efficient algorithms for low discrepancy sequences are discussed. In addition, numerical pitfalls encountered in practice are revealed. We then take a look at massively parallel quasi-Monte Carlo integro-approximation for image synthesis by light transport simulation. Beyond superior uniformity, low discrepancy points may be optimized with respect to additional criteria, such as noise characteristics at low sampling rates or the quality of low-dimensional projections.
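    For reference, a low discrepancy sequence takes only a few lines of integer arithmetic. The sketch below builds Halton points from the van der Corput radical inverse; it is not the implementation discussed in the paper, and the note on precision only gestures at the kind of numerical pitfall that arises in practice.

        def radical_inverse(base: int, i: int) -> float:
            """Van der Corput radical inverse of index i in the given base.

            Uses integer arithmetic and a single final division, avoiding the
            floating-point error that can accumulate when scaled digits are
            summed one by one.
            """
            digits, denom = 0, 1
            while i > 0:
                digits = digits * base + (i % base)
                denom *= base
                i //= base
            return digits / denom

        def halton(i: int, bases=(2, 3, 5, 7)) -> tuple:
            """i-th point of a Halton sequence, one prime base per dimension."""
            return tuple(radical_inverse(b, i) for b in bases)

        # 1024 four-dimensional samples; index starts at 1 to skip the origin.
        samples = [halton(i) for i in range(1, 1025)]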

    A Gaussian process-based approach to rendering

    Bachelor's thesis in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, 2023. Advisor: Ricardo Jorge Rodrigues Sepúlveda Marques.
    Many physically-based image rendering algorithms use the illumination integral to determine the color of each pixel in the rendered image. This integral has a component that can be sampled but has no known analytical expression, so it cannot be computed directly and must be evaluated with approximation methods. Among these we can find the Monte Carlo (MC) and the Bayesian Monte Carlo (BMC) integration methods. MC integration consists of defining a random variable such that its expected value is the solution to the integral, and then repeatedly sampling that random variable to estimate the true value. In contrast, BMC models the function to be integrated using a Gaussian process, which allows for the incorporation of prior information. While MC is conceptually simple and straightforward to implement, it has a slower convergence rate compared to BMC. BMC, on the other hand, allows for better estimates with the same number of samples, even without prior information, by taking into account all available information about the samples, in particular the covariance of the sample locations. In this thesis, I implemented the MC and the BMC algorithms for integration and compared their performances in two settings: the estimation of a single integral with a known true value, and image rendering evaluated with the root-mean-squared error (RMSE). My results showed that the error of BMC converged much faster than that of MC in both settings, mirroring the existing literature on the topic. In addition, I experimented with the use of a constant prior in the BMC method and found promising results for single integral estimation, although further work is needed to successfully apply this finding to image rendering.
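    The contrast between the two estimators can be sketched on a one-dimensional integral over [0, 1]. The code below assumes a zero-mean Gaussian process with a squared-exponential kernel and a fixed length scale, for which the kernel integrals needed by BMC have a closed form; hyperparameter learning, the constant-prior variant, and the rendering pipeline of the thesis are omitted, and all names are ours.

        import numpy as np
        from math import erf, sqrt, pi

        def mc_estimate(f, xs):
            """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
            return float(np.mean([f(x) for x in xs]))

        def bmc_estimate(f, xs, length_scale=0.2, jitter=1e-9):
            """Bayesian Monte Carlo estimate of the integral of f over [0, 1].

            Models f with a zero-mean Gaussian process (squared-exponential
            kernel) and returns the posterior mean of the integral,
            z = k^T K^{-1} y, where K is the covariance between sample
            locations and k_i is the integral of the kernel against the
            uniform measure, available here in closed form.
            """
            xs = np.asarray(xs, dtype=float)
            y = np.array([f(x) for x in xs])
            K = np.exp(-0.5 * ((xs[:, None] - xs[None, :]) / length_scale) ** 2)
            K += jitter * np.eye(len(xs))  # numerical stabilization
            s = length_scale * sqrt(2.0)
            k = np.array([length_scale * sqrt(pi / 2.0) *
                          (erf((1.0 - xi) / s) + erf(xi / s)) for xi in xs])
            return float(k @ np.linalg.solve(K, y))

        # Toy integrand with a known value: integral of sin(pi x) over [0, 1] = 2 / pi.
        rng = np.random.default_rng(0)
        xs = rng.uniform(0.0, 1.0, size=32)
        f = lambda x: np.sin(np.pi * x)
        print("true:", 2.0 / pi)
        print("MC  :", mc_estimate(f, xs))
        print("BMC :", bmc_estimate(f, xs))

    With a smooth integrand and a reasonable length scale, the BMC estimate typically sits much closer to the true value at equal sample counts, which is consistent with the convergence behavior reported in the thesis.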

    Novel illumination algorithms for off-line and real-time rendering

    This thesis presents new and efficient illumination algorithms for off-line and real-time rendering. The realistic rendering of arbitrary indirect illumination is a difficult task. Assuming a ray optics model of light, the rendering equation describes the propagation of light in the scene with high accuracy. However, the computation is expensive, and thus even in off-line rendering, i.e., in prerendered animations, indirect illumination is often approximated, as it would otherwise constitute a bottleneck in the production pipeline. Indirect illumination can be computed using Monte Carlo integration, but when restrained to a reasonable amount of computation time, the result is often corrupted by noise. This thesis includes a method that effectively reduces the noise by applying a spatially varying filter to the noisy illumination. For real-time performance, some components of indirect illumination can be precomputed. The irradiance volume and its many variations precompute reflections and shadowing of a static scene into a volumetric data structure. This data is then used to shade dynamic objects in real time. The practical usage of the method is limited due to aliasing artifacts. This thesis shows that with a suitable super-sampling approach, a significant quality improvement can be obtained. Another direction is to precompute how light propagates in the scene and use the precomputed data during run-time to solve both direct and indirect illumination based on the known incident lighting. To keep the memory and precomputation costs tractable, these methods are typically restricted to infinitely distant lighting; those that are not require a very long precomputation time. This thesis presents an algorithm that adopts a wavelet-based hierarchical finite element method for the precomputation. A significant performance improvement over the existing techniques is obtained. When full global illumination cannot be afforded, ambient occlusion is an attractive alternative. This thesis includes two methods for real-time rendering of ambient occlusion in dynamic scenes. The first method models the shadowing of ambient light between rigid moving bodies. The second method gives a data-oriented solution for rendering approximate ambient occlusion for animated characters in real time. Both methods achieve unprecedented efficiency.
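    For context on the last two contributions, the ambient occlusion of a surface point is a hemispherical visibility integral that can be estimated directly by Monte Carlo sampling. The sketch below is a generic estimator with cosine-weighted sampling, not the thesis's real-time algorithms; the visibility callback occluded is an assumed interface.

        import numpy as np

        def cosine_sample_hemisphere(u1, u2):
            """Cosine-weighted direction on the hemisphere around +Z."""
            r, phi = np.sqrt(u1), 2.0 * np.pi * u2
            return np.array([r * np.cos(phi), r * np.sin(phi),
                             np.sqrt(max(0.0, 1.0 - u1))])

        def ambient_occlusion(point, normal, occluded, n_samples=64, rng=None):
            """Monte Carlo estimate of ambient occlusion at a surface point.

            occluded(origin, direction) is a user-supplied visibility query
            returning True if a ray from origin along direction hits geometry
            within the occlusion radius. With cosine-weighted sampling the
            cosine term cancels against the PDF, so the estimate is simply
            the fraction of unoccluded directions.
            """
            rng = rng or np.random.default_rng()
            n = normal / np.linalg.norm(normal)
            # Orthonormal basis around the normal.
            t = np.cross(n, [0.0, 1.0, 0.0] if abs(n[0]) > 0.1 else [1.0, 0.0, 0.0])
            t /= np.linalg.norm(t)
            b = np.cross(n, t)
            visible = 0
            for _ in range(n_samples):
                d = cosine_sample_hemisphere(rng.random(), rng.random())
                world_dir = d[0] * t + d[1] * b + d[2] * n
                if not occluded(point, world_dir):
                    visible += 1
            return visible / n_samples

        # Trivial usage: a point with nothing around it is fully unoccluded.
        ao = ambient_occlusion(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                               occluded=lambda p, d: False)
        print(ao)  # -> 1.0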

    Toward Evaluating Progressive Rendering Methods in Appearance Design Tasks

    Progressive rendering is becoming a popular alternative to precomputation approaches for appearance design tasks. Images created by different progressive algorithms exhibit various kinds of visual artifacts at the early stages of computation. We present a user study that investigates the effects of these artifacts on user performance in appearance design tasks. Specifically, we ask both novice and expert subjects to perform lighting and material editing tasks with the following algorithms: random path tracing, quasi-random path tracing, progressive photon mapping, and virtual point light (VPL) rendering. Data collected from the experiments suggest that path tracing is strongly preferred to progressive photon mapping and VPL rendering by both experts and novices. There is no indication that quasi-random path tracing is systematically preferred to random path tracing or vice versa; the same holds between progressive photon mapping and VPL rendering. Interestingly, we did not observe any significant difference in user workflow for the different algorithms. As can be expected, experts are faster and more accurate than novices, but surprisingly both groups have similar subjective preferences and workflow.