
    Reflectance, illumination, and appearance in color constancy

    We studied color constancy using a pair of identical 3-D Color Mondrian displays. We viewed one 3-D Mondrian in nearly uniform illumination, and the other in directional, nonuniform illumination. We used the three-dimensional structures to modulate the light falling on the painted surfaces. The 3-D structures in the displays were a matching set of wooden blocks. Across Mondrian displays, each corresponding facet had the same paint on its surface. We used only 6 chromatic and 5 achromatic paints applied to 104 block facets. The 3-D blocks add shadows and multiple reflections not found in flat Mondrians. Both 3-D Mondrians were viewed simultaneously, side by side. We used two techniques to measure the correlation of appearance with surface reflectance. First, observers made magnitude estimates of changes in the appearances of identical reflectances. Second, an author painted a watercolor of the 3-D Mondrians. The watercolor's reflectances quantified the changes in appearances. While constancy generalizations about illumination and reflectance hold for flat Mondrians, they do not for 3-D Mondrians. A constant paint does not exhibit perfect color constancy, but rather shows significant shifts in lightness, hue, and chroma in response to the structure in the nonuniform illumination. Color appearance depends on the spatial information in both the illumination and the reflectances of objects. The spatial information of the quanta catch from the array of retinal receptors generates sensations that have variable correlation with surface reflectance. Models of appearance in humans need to calculate the departures from perfect constancy measured here. This article provides a dataset of measurements of color appearances for computational models of sensation. © 2014 McCann, Parraman and Rizzi
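
    Measured departures of this kind are typically reported as differences in lightness, chroma, and hue between corresponding facets of the two displays. Below is a minimal sketch of such a tabulation, assuming facet appearances are available as CIELAB coordinates; the numeric values in the usage example are hypothetical and only illustrate the calculation, not the paper's data.

```python
import numpy as np

def lab_to_lch(lab):
    """Convert a CIELAB triple to lightness, chroma, and hue angle (degrees)."""
    L, a, b = lab
    chroma = np.hypot(a, b)
    hue = np.degrees(np.arctan2(b, a)) % 360.0
    return L, chroma, hue

def appearance_shift(lab_uniform, lab_directional):
    """Per-facet shift in lightness, chroma, and hue between the facet seen under
    nearly uniform illumination and the same paint seen under the directional,
    nonuniform illumination."""
    L1, C1, h1 = lab_to_lch(lab_uniform)
    L2, C2, h2 = lab_to_lch(lab_directional)
    dh = (h2 - h1 + 180.0) % 360.0 - 180.0   # wrap the hue difference to [-180, 180)
    return L2 - L1, C2 - C1, dh

# Hypothetical CIELAB measurements for one corresponding facet pair.
print(appearance_shift((55.0, 20.0, 30.0), (42.0, 12.0, 38.0)))
```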

    Color image enhancement using a Retinex-based adaptive filter

    We present a new adaptation of Retinex to enhance the rendering of high dynamic range digital color images. The image is processed using an adaptive Gaussian filter. The shape of the filter basis is adapted to follow the high-contrast edges of the image. In this way, the artifacts introduced by a circularly symmetric filter at the borders of high-contrast areas are reduced. This method provides a way of rendering natural images that is inspired by human local adaptation. It is included in a framework that takes raw linear images or radiance maps and outputs 24-bit images rendered for display.
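
    The key idea is a center/surround comparison in which the surround weighting does not spread across strong edges. The sketch below approximates that behaviour with a bilateral-style surround (a spatial Gaussian multiplied by a penalty on log-luminance differences); it is a stand-in for illustration, not the paper's exact edge-following filter, and the parameter values are assumptions.

```python
import numpy as np

def adaptive_surround_retinex(lum, radius=15, sigma_s=8.0, sigma_r=0.4, eps=1e-6):
    """Center/surround Retinex on a luminance image in which the surround is
    attenuated where it would cross strong edges. lum: float array in (0, 1]."""
    log_l = np.log(lum + eps)
    h, w = lum.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    padded = np.pad(log_l, radius, mode="edge")
    out = np.empty_like(log_l)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Downweight surround pixels whose log-luminance differs strongly
            # from the center, so the surround does not leak across edges.
            range_w = np.exp(-((patch - log_l[i, j]) ** 2) / (2.0 * sigma_r**2))
            weights = spatial * range_w
            surround = (weights * patch).sum() / weights.sum()
            out[i, j] = log_l[i, j] - surround   # log ratio of center to surround
    return out
```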

    Bio-inspired image enhancement for natural color images

    Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is due to the fact that the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique. Either details in the dim room will be hidden in shadow or the objects viewed through the window will be too bright. The image has to be treated locally to more closely resemble what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex. This model determines the perceived color given the spatial relationships of the captured signals. Retinex has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning. This makes the method more flexible than other existing ones. The presented results show that our method suitably enhances high dynamic range images.
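
    A minimal sketch of a single-scale Retinex of this kind is shown below, assuming a Rec. 709 luminance channel and a surround scale tied to the image size so no parameter needs manual tuning; both choices are assumptions for illustration, not the paper's image-dependent rules.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_luminance(rgb, eps=1e-6):
    """Single-scale Retinex applied to the luminance channel only.
    rgb: float array in [0, 1] of shape (H, W, 3)."""
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    sigma = 0.05 * max(lum.shape)                 # surround scale derived from image size
    log_lum = np.log(lum + eps)
    surround = gaussian_filter(log_lum, sigma)    # local average in the log domain
    new_log = log_lum - surround                  # center/surround ratio
    new_lum = np.exp(new_log - new_log.max())     # normalize treated luminance to <= 1
    scale = new_lum / (lum + eps)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)   # re-apply the original chromaticity
```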

    Tuning Retinex Parameters

    Our goal is to understand how the Retinex parameters affect the predictions of the model. A simplified Retinex computation is specified in the recent MATLABℱ implementation; however, there remain several free parameters that introduce significant variability into the model's predictions. We extend previous work on specifying these parameters. In particular, instead of looking for fixed values for the parameters, we establish methods that automatically determine values for them based on the input image. These methods are tested on the McCann-McKee-Taylor asymmetric matching data, along with some previously unpublished data that include simultaneous contrast targets.
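
    One free parameter of that implementation is the number of comparison iterations. The fragment below illustrates the general idea of deriving such a value from the input image itself; the specific rule (scaling with the image's log dynamic range) is an assumption for illustration, not the rule established in the paper.

```python
import numpy as np

def choose_retinex_iterations(lum, eps=1e-6):
    """Pick an iteration count from the image rather than fixing it by hand.
    lum: float array of scene luminances with at least one positive value."""
    log_range = np.log10(lum.max() + eps) - np.log10(lum[lum > 0].min())
    return max(1, int(round(4 * log_range)))   # more iterations for wider dynamic range
```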

    Retinex in MATLABℱ

    Many different descriptions of Retinex methods of lightness computation exist. We provide concise MATLABℱ implementations of two of the spatial techniques of making pixel comparisons. The code is presented, along with test results on several images and a discussion of the results. We also discuss the calibration of input images and the post-Retinex processing required to display the output images.
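
    The core spatial operation in such implementations is the ratio-product-reset-average comparison between each pixel and a shifted copy of the image. The sketch below shows one such comparison step in Python rather than MATLABℱ, in the log domain and simplified to a single shift with toroidal wrap-around; the full multiresolution scheme and border handling are omitted.

```python
import numpy as np

def retinex_compare(old_product, log_image, shift_rows, shift_cols, maximum=0.0):
    """One ratio-product-reset-average step: compare every pixel with a
    neighbour at the given shift and update the lightness estimate."""
    shifted_prod = np.roll(old_product, (shift_rows, shift_cols), axis=(0, 1))
    shifted_img = np.roll(log_image, (shift_rows, shift_cols), axis=(0, 1))
    ratio = log_image - shifted_img           # ratio of pixel to shifted neighbour (log)
    product = shifted_prod + ratio            # propagate the neighbour's estimate
    product = np.minimum(product, maximum)    # reset: nothing may exceed the maximum
    return 0.5 * (old_product + product)      # average with the previous estimate
```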

    High Dynamic Range Image Rendering Using a Retinex-Based Adaptive Filter

    We propose a new method to render high dynamic range images that models global and local adaptation of the human visual system. Our method is based on the center-surround Retinex model. The novelties of our method are, first, the use of an adaptive surround whose shape follows the image's high-contrast edges, thus reducing the halo artifacts common to other methods; and second, the processing of only the luminance channel, defined as the first component of a principal component analysis. Principal component analysis provides orthogonality between channels and thus reduces the chromatic changes caused by the modification of luminance. We show that our method efficiently renders high dynamic range images and we compare our results with the current state of the art.
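
    A minimal sketch of the PCA-derived luminance channel is given below: the first principal component of the RGB pixel distribution serves as luminance, leaving two orthogonal components to carry chroma. Function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def pca_luminance(rgb):
    """First principal component of the RGB pixels, used as the luminance channel.
    rgb: float array of shape (H, W, 3). Returns the luminance image, the
    principal direction, and the channel means (needed to reconstruct colors)."""
    pixels = rgb.reshape(-1, 3)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = centered.T @ centered / centered.shape[0]   # 3x3 channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    first_pc = eigvecs[:, -1]                         # largest-variance direction
    if first_pc.sum() < 0:                            # keep luminance positively oriented
        first_pc = -first_pc
    lum = centered @ first_pc
    return lum.reshape(rgb.shape[:2]), first_pc, mean
```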

    Scene relighting and editing for improved object insertion

    The goal of this thesis is to develop a scene relighting and object insertion pipeline using Neural Radiance Fields (NeRF) to incorporate one or more objects into an outdoor environment scene. The output is a 3D mesh that embodies decomposed bidirectional reflectance distribution function (BRDF) characteristics, which interact with varying light source positions and strengths. To achieve this objective, the thesis is divided into two sub-tasks. The first sub-task involves extracting visual information about the outdoor environment from a sparse set of corresponding images. A neural representation is constructed, providing a comprehensive understanding of the constituent elements, such as materials, geometry, illumination, and shadows. The second sub-task involves generating a neural representation of the inserted object using either real-world images or synthetic data. To accomplish these objectives, the thesis draws on existing literature in computer vision and computer graphics. Different approaches are assessed to identify their advantages and disadvantages, with detailed descriptions of the chosen techniques provided, highlighting how they function to produce the final result. Overall, this thesis aims to provide a framework for compositing and relighting that is grounded in NeRF and allows for the seamless integration of objects into outdoor environments. The outcome of this work has potential applications in various domains, such as visual effects, gaming, and virtual reality.
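
    To make the relighting step concrete: once per-point albedo and normals have been decomposed, re-shading them under a movable point light amounts to evaluating the reflectance model for the recovered BRDF. The toy sketch below does this for a purely diffuse (Lambertian) BRDF, which is an assumption for illustration and not the thesis's decomposition or renderer.

```python
import numpy as np

def relight_lambertian(albedo, normals, points, light_pos, light_intensity):
    """Re-shade decomposed diffuse albedo under a point light.
    albedo, normals, points: (N, 3) arrays; light_pos: (3,); light_intensity: scalar."""
    to_light = light_pos - points                          # vectors toward the light
    dist2 = np.sum(to_light**2, axis=1, keepdims=True)     # squared-distance falloff
    l_dir = to_light / np.sqrt(dist2)
    n_dot_l = np.clip(np.sum(normals * l_dir, axis=1, keepdims=True), 0.0, None)
    return albedo * light_intensity * n_dot_l / np.maximum(dist2, 1e-6)
```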

    Appearance-based image splitting for HDR display systems

    High dynamic range displays that incorporate two optically coupled image planes have recently been developed. This dual image plane design requires that a given HDR input image be split into two complementary standard dynamic range components that drive the coupled systems, which gives rise to an image splitting problem. In this research, two types of HDR display systems (hardcopy and softcopy) are constructed to facilitate the study of image splitting algorithms for building HDR displays. A new HDR image splitting algorithm incorporating the iCAM06 image appearance model is proposed, seeking to create displayed HDR images with better image quality. The new algorithm has the potential to improve the perception of image detail, colorfulness, and gamut utilization. Finally, the performance of the new iCAM06-based HDR image splitting algorithm is evaluated and compared with the widely used luminance square-root algorithm through psychophysical studies.
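
    The square-root baseline mentioned above is simple to state: because the two optically coupled planes multiply, giving each plane the square root of the normalized target luminance reconstructs the HDR signal. The sketch below shows that baseline split only; the proposed iCAM06-based algorithm replaces it with an appearance-model-driven split and is not reproduced here.

```python
import numpy as np

def square_root_split(hdr_lum):
    """Baseline luminance square-root split for a dual image plane HDR display.
    hdr_lum: float array of target luminances. Returns two SDR planes in [0, 1]
    whose pixel-wise product (times the peak luminance) reproduces hdr_lum."""
    norm = hdr_lum / hdr_lum.max()
    plane = np.sqrt(norm)
    return plane, plane          # back plane and front plane carry the same image
```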