
    How Multi-Illuminant Scenes Affect Automatic Colour Balancing

    Many illumination-estimation methods are based on the assumption that the imaged scene is lit by a single source of illumination; however, this assumption is often violated in practice. We investigate the effect this has on a suite of illumination-estimation methods by manually sorting the Gehler et al. ColorChecker set of 568 images into the 310 that are approximately single-illuminant and the 258 that are clearly multiple-illuminant, and comparing the performance of the various methods on the two sets. The Grayworld, Spatio-Spectral-Statistics and Thin-Plate-Spline methods are relatively unaffected, but the other methods are all affected to varying degrees.
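    Grayworld, one of the methods named above, assumes the average reflectance in a scene is achromatic, so any colour cast in the per-channel means is attributed to the illuminant. A minimal sketch of that idea (the function names and normalisation are illustrative, not taken from the paper):

```python
import numpy as np

def grayworld_estimate(image):
    """Estimate the illuminant RGB as the mean of each channel.

    Grayworld assumes the average scene reflectance is achromatic,
    so any colour cast in the channel means is attributed to the light.
    image: float array of shape (H, W, 3).
    """
    illuminant = image.reshape(-1, 3).mean(axis=0)
    # Normalise to a unit vector so only the chromaticity matters.
    return illuminant / np.linalg.norm(illuminant)

def correct(image, illuminant):
    """Divide out the estimated illuminant (von Kries diagonal model)."""
    balanced = image / (illuminant * np.sqrt(3))  # scale so gray stays gray
    return np.clip(balanced, 0.0, 1.0)
```

A scene lit by two differently coloured sources violates the single-illuminant premise of the channel-mean step, which is exactly the failure mode the paper measures.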

    Estimating varying illuminant colours in images

    Colour constancy is the ability to perceive colours independently of varying illumination colour. A human can tell that a white t-shirt is indeed white, even under blue or red illumination, although these illuminant colours actually make the reflected colour of the t-shirt bluish or reddish. Humans can, to a good extent, perceive colours as constant. Getting a computer to achieve the same goal with a high level of accuracy has proven problematic, particularly if we want to use colour as a main cue in object recognition: a system trained on object colours under one illuminant and asked to recognise the same objects under another illuminant would likely fail. Early colour constancy algorithms assumed that an image contains a single uniform illuminant; they would estimate the colour of that illuminant and apply a single correction to the entire image. It is not hard, however, to imagine a scene lit by more than one illuminant. In an outdoor scene on a typical summer's day, some objects are brightly lit by sunlight while others are in shadow, and the ambient light in shadow is known to be a different colour from direct sunlight (bluish and yellowish, respectively). There are therefore at least two illuminant colours to be recovered in such a scene. This thesis focuses on this harder case of recovering the illuminant colours when more than one is present. Early work on the subject made the empirical observation that illuminant colours are much more predictable than surface colours: real-world illuminants tend not to be greens or purples, but rather blues, yellows and reds. We can think of an illuminant mapping as the function which takes a scene from some unknown illuminant to a known illuminant, and we model this mapping as a simple multiplication of the red, green and blue channels of a pixel.
It turns out that the set of realistic mappings approximately lies on a line segment in chromaticity space. We propose an algorithm that uses this knowledge and requires only two pixels of the same surface under two illuminants as input; from these we recover an estimate of the surface reflectance colour and, subsequently, the two illuminants. We also propose a more robust algorithm that can use varying surface reflectance data in a scene. One of the most successful colour constancy algorithms, known as Gamut Mapping, was developed by Forsyth (1990). He argued that the illuminant colour of a scene naturally constrains the surface colours it is possible to perceive: we could not perceive a very chromatic red under a deep blue illuminant. We introduce our multiple-illuminant constraint in a Gamut Mapping context and are able to further improve its performance. The final piece of work proposes a method for detecting shadow edges, so that we can automatically recover estimates of the illuminant colours in and out of shadow. We also formulate our illuminant estimation algorithm as a voting scheme that probabilistically chooses an illuminant estimate on each side of the shadow edge. We test the performance of all our algorithms experimentally on well-known datasets, as well as on our newly proposed shadow datasets.
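    The diagonal model underlying this abstract can be sketched in a few lines. The scenario below (a surface seen in sunlight and in bluish shadow) is an invented illustration of why the unknown reflectance cancels when the same surface is observed under two lights; it is not the thesis's full recovery algorithm, which additionally constrains the mappings to the illuminant line:

```python
import numpy as np

def render(surface_rgb, illuminant_rgb):
    """Observed pixel = surface reflectance * illuminant, channelwise
    (the diagonal, von Kries-style model of the abstract)."""
    return surface_rgb * illuminant_rgb

def relative_mapping(pixel_a, pixel_b):
    """Ratio of two observations of the SAME surface under two lights.

    The unknown reflectance cancels, leaving the channelwise illuminant
    ratio e_b / e_a: the diagonal map from illuminant A to illuminant B.
    """
    return pixel_b / pixel_a

surface  = np.array([0.6, 0.4, 0.2])   # unknown in practice
sunlight = np.array([1.0, 0.9, 0.7])   # yellowish direct light
shadow   = np.array([0.6, 0.7, 1.0])   # bluish ambient light
p_sun    = render(surface, sunlight)
p_shadow = render(surface, shadow)
mapping  = relative_mapping(p_sun, p_shadow)   # equals shadow / sunlight
```

Intersecting this relative mapping with the line segment of plausible illuminants is what lets the thesis pin down the two absolute illuminant colours rather than just their ratio.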

    Dichromatic Illumination Estimation via Hough Transforms in 3D

    A new illumination-estimation method is proposed based on the dichromatic reflection model combined with Hough transform processing. Other researchers have shown that, using the dichromatic reflection model under the assumption of neutral interface reflection, the colour of the illuminating light can be estimated by intersecting the dichromatic planes created by two or more differently coloured regions. Our proposed method employs two Hough transforms in sequence in RGB space. The first Hough transform creates a dichromatic plane histogram representing the number of pixels belonging to dichromatic planes created by differently coloured scene regions. The second Hough transform creates an illumination axis histogram representing the total number of pixels satisfying the dichromatic model for each posited illumination axis. This method overcomes limitations of previous approaches, which include requirements such as: that the number of distinct surfaces be known in advance, that the image be presegmented into regions of uniform colour, and that the image contain distinct specularities. Many of these methods rely on the assumption that there are sufficiently large, connected regions of a single, highly specular material in the scene. Comparing the performance of the proposed approach with previous non-training methods on a set of real images, the proposed method yields better results while requiring no prior knowledge of the image content.
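    The geometric core of the plane-intersection idea can be shown directly, without the Hough voting: under the dichromatic model each uniformly coloured region's pixels lie on a plane through the origin spanned by the body colour and the illuminant, so the illuminant direction is the intersection of two such planes. The least-squares plane fit below stands in for the paper's histogram voting and is an illustrative simplification:

```python
import numpy as np

def plane_normal(pixels):
    """Normal of the dichromatic plane (through the origin) best fitting
    an (N, 3) array of RGB pixels, via the smallest right singular vector."""
    _, _, vt = np.linalg.svd(pixels, full_matrices=False)
    return vt[-1]

def illuminant_axis(region_a, region_b):
    """Intersect two dichromatic planes: the illuminant direction is
    perpendicular to both plane normals, i.e. their cross product."""
    axis = np.cross(plane_normal(region_a), plane_normal(region_b))
    axis = np.abs(axis)   # resolve sign: a physical RGB axis is non-negative
    return axis / np.linalg.norm(axis)
```

The Hough formulation in the paper replaces the explicit fit and intersection with two rounds of accumulator voting, which is what removes the need for presegmentation and a known surface count.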

    Color correction pipeline optimization for digital cameras

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed at adapting the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talk between the modules of the pipeline can lead to a higher color-rendition accuracy. © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI (DOI: 10.1117/1.JEI.22.2.023014).
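    The two-module structure described above can be sketched as follows. The illuminant value and the 3x3 matrix here are illustrative placeholders, not the paper's tuned, content-adaptive versions; the point is only the composition of the two stages:

```python
import numpy as np

def diagonal_correction(raw, illuminant_rgb):
    """Module 1: divide out the estimated illuminant, channelwise."""
    return raw / illuminant_rgb

# Module 2: a colour matrix mapping sensor RGB to a standard colour space.
# Each row sums to 1 so that neutral (equal-channel) colours stay neutral.
COLOR_MATRIX = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [-0.1, -0.5,  1.6],
])

def pipeline(raw, illuminant_rgb, matrix=COLOR_MATRIX):
    """Colour correction pipeline: illuminant correction, then matrixing."""
    balanced = diagonal_correction(raw, illuminant_rgb)
    return balanced @ matrix.T
```

Because the matrix's off-diagonal terms amplify any residual cast left by module 1, jointly adapting the matrix to the illuminant module's error behaviour, as the paper proposes, can reduce the final colour error.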

    Extending Minkowski norm illuminant estimation

    The ability to obtain colour images invariant to changes of illumination is called colour constancy. An algorithm for colour constancy takes sensor responses (digital images) as input, estimates the ambient light and returns a corrected image in which the illuminant's influence over the colours has been removed. In this thesis we investigate the illuminant-estimation step of colour constancy and aim to extend the state of the art in this field. We first revisit the Minkowski family norm framework for illuminant estimation because, of all the simple statistical approaches, it is the most general formulation and, crucially, delivers the best results. This thesis makes four technical contributions. First, we reformulate the Minkowski approach to provide better estimation when a constraint on illumination is employed. Second, we show how the method can be implemented to run much faster (by orders of magnitude) than previous algorithms. Third, we show how a simple edge-based variant delivers improved estimation compared with the state of the art across many datasets. In contradistinction to the prior state of the art, our definition of edges is fixed (a simple combination of first and second derivatives), i.e. we do not tune our algorithm to particular image datasets. This performance is further improved by incorporating a gamut constraint on surface colour, our fourth contribution. The thesis finishes by considering our approach in the context of a recent OSA competition run to benchmark computational algorithms operating on physiologically relevant cone-based input data. Here we find that constrained Minkowski norms operating on spectrally sharpened cone sensors (linear combinations of the cones that behave more like camera sensors) support competition-leading illuminant estimation.
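    The unconstrained Minkowski-norm estimator the thesis starts from (often called Shades of Gray) is compact enough to state directly; the sketch below is the basic framework only, without the thesis's constraints, edge variant or gamut extension:

```python
import numpy as np

def minkowski_illuminant(image, p=6):
    """Shades-of-Gray estimate: per-channel Minkowski p-norm mean.

    p = 1 reduces to Grayworld, p -> infinity approaches Max-RGB;
    intermediate p (often around 6) tends to work best in practice.
    image: float array of shape (H, W, 3).
    """
    flat = image.reshape(-1, 3).astype(float)
    estimate = (flat ** p).mean(axis=0) ** (1.0 / p)
    # Normalise: only the chromaticity of the illuminant matters.
    return estimate / np.linalg.norm(estimate)
```

A single exponent thus interpolates between the two classic statistical estimators, which is the generality the abstract credits to the framework.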

    Algorithms for the enhancement of dynamic range and colour constancy of digital images & video

    One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no current imaging technology has been able to accurately reproduce its capabilities. The extraordinary capabilities of the human eye thus expose a crucial shortcoming of digital imaging, as digital photography, video recording, and computer vision applications continue to demand more realistic and accurate image reproduction and analysis. Over decades, researchers have tried to solve the colour constancy problem and to extend the dynamic range of digital imaging devices by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partially due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and to the difficulty of matching the human visual system's colour constancy and dynamic range capabilities. The aim of the research presented in this thesis is to enhance overall image quality within the image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer imaging devices. The experiments conducted in this research show that the proposed algorithms outperform state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this unique set of image-processing algorithms shows that, when used within an image signal processor, digital camera devices can mimic the human visual system's dynamic range and colour constancy capabilities: the ultimate goal of any state-of-the-art technique or commercial imaging device.

    A testing procedure to characterize color and spatial quality of digital cameras used to image cultural heritage

    A testing procedure for characterizing both the color and spatial image quality of trichromatic digital cameras, which are used to photograph paintings in cultural heritage institutions, is described. This testing procedure is target-based, thus providing objective measures of quality. The majority of the testing procedure followed current standards from national and international organizations such as ANSI, ISO, and IEC. The procedure was developed in an academic research laboratory and used to benchmark the digital-camera systems and workflows of four representative American museums. The quality parameters tested included system spatial uniformity, tone reproduction, color reproduction accuracy, noise, dynamic range, spatial cross-talk, spatial frequency response, color-channel registration, and depth of field. In addition, two paintings were imaged and processed through each museum's normal digital workflow. The results of the four case studies showed many dissimilarities among the digital-camera systems and workflows of American museums, resulting in a significant range in the archival quality of their digital masters.

    Simultaneous image color correction and enhancement using particle swarm optimization

    Color images captured under various environments are often not ready to deliver the desired quality due to adverse effects caused by uncontrollable illumination settings. In particular, when the illuminant color is not known a priori, the colors of the objects may not be faithfully reproduced, which imposes difficulties on subsequent image processing operations. Color correction thus becomes a very important pre-processing procedure, where the goal is to produce an image as if it were captured under uniform chromatic illumination. On the other hand, conventional color correction algorithms using linear gain adjustments focus only on color manipulation and may not convey the maximum information contained in the image. This challenge can be posed as a multi-objective optimization problem that simultaneously corrects the undesirable effect of the illumination color cast while recovering the information conveyed by the scene. A variation of the particle swarm optimization algorithm is developed from this multi-objective perspective, yielding a solution that achieves a desirable color balance and an adequate delivery of information. Experiments are conducted using a collection of color images of natural objects captured under different lighting conditions. Results show that the proposed method is capable of delivering images of higher quality. © 2013 Elsevier Ltd. All rights reserved.
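    To make the optimization framing concrete, here is a deliberately simplified, single-objective stand-in: a small particle swarm searching for per-channel gains that neutralize a color cast. The fitness function, parameter values and bounds are all invented for illustration; the paper's method is multi-objective and also rewards information content, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(gains, image):
    """Lower is better: squared deviation of the corrected channel means
    from a neutral (equal-channel) average. A single-objective stand-in
    for the paper's joint colour-balance / information criterion."""
    means = (image * gains).reshape(-1, 3).mean(axis=0)
    return np.sum((means - means.mean()) ** 2)

def pso_gains(image, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO over candidate RGB gain triples."""
    pos = rng.uniform(0.5, 2.0, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p, image) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Velocity update: inertia + pull toward personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.1, 4.0)
        f = np.array([fitness(p, image) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest
```

Replacing this scalar fitness with a vector of objectives (color balance and information delivery) and a dominance-based best-selection rule is, in outline, what distinguishes the paper's multi-objective variant.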