5 research outputs found

    The potential of computationally rendered images for the evaluation of lighting quality in interior spaces

    Lighting designers can show and help others visualize the outcome of a lighting design using tools such as calculations, mock-ups, and renderings. In recent years, the use of digital renderings instead of mock-up installations has become increasingly popular. Even though today's rendering methods make it possible to simulate a scene accurately, this does not guarantee that the resulting images will be interpreted and perceived correctly. The increasing application of computer graphics in contexts that demand high levels of realism has made it necessary to examine how these images are evaluated and validated. The objective of our research is to determine whether classic lighting studies can be explored in a contemporary setting using computationally rendered images, and to identify to what extent the subjective evaluation of the lighting conditions of an interior space can be reproduced with these images.

    Perceptual error optimization for Monte Carlo rendering

    Realistic image synthesis involves computing high-dimensional light transport integrals, which in practice are numerically estimated using Monte Carlo integration. The error of this estimation manifests itself in the image as visually displeasing aliasing or noise. To ameliorate this, we develop a theoretical framework for optimizing screen-space error distribution. Our model is flexible and works for arbitrary target error power spectra. We focus on perceptual error optimization by leveraging models of the human visual system's (HVS) point spread function (PSF) from the halftoning literature. This results in a specific optimization problem whose solution distributes the error as visually pleasing blue noise in image space. We develop a set of algorithms that provide a trade-off between quality and speed, showing substantial improvements over the prior state of the art. We perform evaluations using both quantitative and perceptual error metrics to support our analysis, and provide extensive supplemental material to help evaluate the perceptual improvements achieved by our methods.
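    The screen-space optimization described above can be pictured with a small stand-alone sketch. This is only an illustrative toy under stated assumptions, not the paper's implementation: it assumes each pixel has a handful of alternative Monte Carlo estimates (e.g. from different sample sequences), approximates the HVS point spread function with a single Gaussian low-pass kernel, and greedily reassigns per-pixel estimates whenever doing so lowers the filtered error energy, which pushes the residual error toward a blue-noise distribution.

    ```python
    # Minimal sketch of perceptual (blue-noise) error optimization in screen space.
    # Assumptions: per-pixel candidate estimates, a reference image to define the
    # error, and a Gaussian kernel standing in for the HVS point spread function.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def perceptual_energy(error_img, sigma=1.0):
        # ||g * e||^2, with a Gaussian g as a stand-in for the HVS PSF.
        return float(np.sum(gaussian_filter(error_img, sigma) ** 2))

    def optimize_error_distribution(candidates, reference, iters=20000, sigma=1.0, seed=0):
        """Greedy optimization: at each step, try replacing one pixel's Monte Carlo
        estimate with another candidate for that pixel and keep the change only if
        it lowers the low-pass filtered error energy.

        candidates: (H, W, K) array of K alternative estimates per pixel.
        reference:  (H, W) ground-truth image used to define the error.
        """
        rng = np.random.default_rng(seed)
        H, W, K = candidates.shape
        image = candidates[..., 0].copy()            # start from the first candidate everywhere
        energy = perceptual_energy(image - reference, sigma)

        for _ in range(iters):
            y, x, k = rng.integers(H), rng.integers(W), rng.integers(K)
            old_value = image[y, x]
            image[y, x] = candidates[y, x, k]        # tentative reassignment
            new_energy = perceptual_energy(image - reference, sigma)
            if new_energy < energy:
                energy = new_energy                  # keep the improving move
            else:
                image[y, x] = old_value              # revert
        return image
    ```

    For clarity the filtered energy is recomputed over the whole image after every tentative change; a practical implementation would update it locally around the modified pixel and would use a more faithful HVS model than a single Gaussian.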

    Using the visual differences predictor to improve performance of progressive global illumination computation

    A novel view-independent technique for progressive global illumination computation that uses prediction of visible differences to improve both the efficiency and effectiveness of physically sound lighting solutions has been developed. The technique is a mixture of stochastic (density estimation) and deterministic (adaptive mesh refinement) algorithms used in sequence and optimized to reduce the differences between the intermediate and final images as perceived by a human observer in the course of the lighting computation. The quantitative measurements of visibility were obtained using the model of human vision captured in the visible differences predictor (VDP) developed by Daly [1993]. The VDP responses were used to support the selection of the best component algorithms from a pool of global illumination solutions, and to enhance the selected algorithms for even better progressive refinement of image quality. The VDP was also used to determine the optimal sequential order of component-algorithm execution and to choose the points at which switchover between algorithms should take place. As the VDP is computationally expensive, it was applied exclusively at the design and tuning stage of the composite technique; perceptual considerations are thus embedded into the resulting solution even though no VDP calculations are performed during lighting simulation.
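    The role a perceptual predictor can play in choosing switchover points is illustrated by the small offline sketch below. This is only a hedged illustration, not the authors' method or Daly's VDP: visible_difference is a crude stand-in (the fraction of pixels whose low-pass filtered difference from the converged image exceeds a fixed threshold), and it assumes intermediate frames of both component algorithms were logged at matching time steps during a prior tuning run, reflecting the point that the VDP was applied only at the design and tuning stage.

    ```python
    # Sketch of offline switchover selection between two progressive global
    # illumination stages, using a simple perceptual-difference stand-in (not
    # Daly's VDP). All names and thresholds here are illustrative assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def visible_difference(img, final, jnd=0.02, sigma=1.5):
        # Fraction of pixels whose low-pass filtered difference from the
        # converged image exceeds a crude just-noticeable-difference threshold.
        return float(np.mean(gaussian_filter(np.abs(img - final), sigma) > jnd))

    def pick_switchover_step(stochastic_frames, deterministic_frames, final):
        """Return the step at which the deterministic refinement starts reducing
        the predicted visible differences faster than the stochastic solution.

        stochastic_frames, deterministic_frames: lists of intermediate (H, W)
        images logged at matching time steps during a tuning run.
        final: the converged (H, W) solution both sequences approach.
        """
        steps = min(len(stochastic_frames), len(deterministic_frames))
        for t in range(1, steps):
            gain_s = (visible_difference(stochastic_frames[t - 1], final)
                      - visible_difference(stochastic_frames[t], final))
            gain_d = (visible_difference(deterministic_frames[t - 1], final)
                      - visible_difference(deterministic_frames[t], final))
            if gain_d > gain_s:
                return t                         # switch to the deterministic stage here
        return steps - 1                         # never worth switching within the log
    ```

    Because the comparison runs entirely on pre-recorded frames, the expensive perceptual evaluation never has to be performed during the actual lighting simulation, mirroring the design-time use of the VDP described in the abstract.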