
    Multispectral Color Constancy: Real Image Tests

    Experiments using real images are conducted on a variety of color constancy algorithms (Chromagenic, Greyworld, Max RGB, and a Maloney-Wandell extension called Subspace Testing) in order to determine whether extending the number of channels from 3 to 6 or 9 enhances the accuracy with which they estimate the scene illuminant color. To create the 6- and 9-channel images, filters were placed over a standard 3-channel color camera. Although some improvement is found with 6 channels, the results indicate that the extra channels do not help as much as might be expected.
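    As a concrete illustration of the simplest estimators compared here, the sketch below generalises Greyworld and Max RGB from 3 to N channels; the function names and the angular-error metric are standard but assumed, not taken from the paper.

```python
import numpy as np

def grey_world(img):
    """Grey-World: the per-channel mean of the image estimates the
    illuminant. img is (H, W, N) with N = 3, 6 or 9 channels."""
    est = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return est / np.linalg.norm(est)

def max_rgb(img):
    """Max RGB: the per-channel maximum estimates the illuminant."""
    est = img.reshape(-1, img.shape[-1]).max(axis=0)
    return est / np.linalg.norm(est)

def angular_error(est, gt):
    """Angular error in degrees, the usual accuracy measure."""
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```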

    Chromagenic filter design

    A chromagenic camera captures a pair of RGB images of a scene. Both images are captured as in a conventional digital imaging device, but one of the pair is optically pre-filtered using a so-called chromagenic filter. It has been shown that the information in such a pair of images makes it easier to solve certain problems in colour vision. For example, it can help to solve the illuminant estimation problem, provided that the chromagenic filter used when capturing the image pair is chosen carefully. In this paper we investigate two schemes for deriving the "optimal" filter for a chromagenic device in the context of the illuminant estimation problem, and we show that the choice of filter does indeed have a significant effect on algorithm performance.
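    For context, chromagenic illuminant estimation is usually described as precomputing, for each candidate illuminant, a linear transform relating unfiltered to filtered responses, then picking the candidate whose transform best explains a test pair. The sketch below follows that description; the data layout and function names are assumptions.

```python
import numpy as np

def fit_transforms(unfiltered_sets, filtered_sets):
    """Training: for each candidate illuminant, fit a 3x3 transform T
    mapping unfiltered RGBs to filtered RGBs by least squares.
    Each element of the input lists is a (K, 3) array of K patches."""
    return [np.linalg.lstsq(P, F, rcond=None)[0]
            for P, F in zip(unfiltered_sets, filtered_sets)]

def estimate_illuminant(P, F, transforms):
    """Testing: choose the candidate whose transform best maps the
    unfiltered image RGBs P (K, 3) to the filtered ones F (K, 3)."""
    errors = [np.linalg.norm(P @ T - F) for T in transforms]
    return int(np.argmin(errors))
```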

    Revisiting and evaluating colour constancy and colour stabilisation algorithms

    When we capture a scene with a digital camera, the sensor generates a digital response, the Raw image. This response depends on the ambient light, the object reflectance and the sensitivity of the camera. The generated image is processed by the camera pipeline, a series of operations aiming at processing the colours of the image to make it more pleasant for the user. Further colour processing can also be performed on the pipeline output image. That said, processing the colours is not only important for aesthetic reasons, but also for various computer vision tasks where a faithful reproduction of the scene colours is needed, e.g. for object recognition and tracking. In this thesis, we focus on two important colour processing operations: colour constancy and colour stabilisation.

    Colour constancy is the ability of a visual system to see an object with the same colour independently of the light colour; the camera processes the image so the scene looks as if captured under a canonical light, usually a white light. This means that when we take two images of, say, a green apple in the sunlight and indoors under a tungsten light, we want the apple to appear green in both cases. To do that, one important step of the pipeline is to estimate the light colour in the scene and then discount it from the image. In this thesis we first focus on the illuminant estimation problem, in particular on the performance evaluation of illuminant estimation algorithms on the benchmark ColorChecker dataset. More precisely, we show the importance of the accuracy of the ground-truth illuminants when evaluating and comparing algorithms.

    The following part of the thesis is about chromagenic illuminant estimation, which is based on using two images of the scene: one filtered and one unfiltered, where the two images need to be registered. We revisit the preprocessing step (colour correction) of the chromagenic method and introduce the use of the Monge-Kantorovitch transform (MKT), which removes the need for the expensive registration task. We also introduce two new datasets of chromagenic images for the evaluation of illuminant estimation methods.

    The last part of the thesis is about colour stabilisation, which is particularly important in video processing, where consistency of colours is required across image frames. When the camera moves or when the shooting parameters change, the same object in the scene can appear with different colours in two consecutive frames. To solve for colour stabilisation given a pair of images of the same scene, we need to process the first image to match the second. We propose using MKT to find the mapping. Our novel method gives competitive results compared to other recent methods while being less computationally expensive.
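    The MKT used in the thesis has a well-known closed form for a linear map between two colour distributions; the sketch below shows that standard formulation (the function name and the mean-matching step are illustrative assumptions, not the thesis's exact pipeline).

```python
import numpy as np
from scipy.linalg import sqrtm

def mkt_map(X, Y):
    """Linear Monge-Kantorovitch mapping taking the colour statistics
    of X towards those of Y (both (N, 3) RGB arrays). The map T is
    the symmetric solution of T A T = B, where A and B are the
    source and target colour covariances."""
    A = np.cov(X, rowvar=False)
    B = np.cov(Y, rowvar=False)
    A_h = np.real(sqrtm(A))               # A^{1/2}
    A_hi = np.linalg.inv(A_h)             # A^{-1/2}
    T = A_hi @ np.real(sqrtm(A_h @ B @ A_h)) @ A_hi
    return (X - X.mean(axis=0)) @ T + Y.mean(axis=0)
```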

    Multispectral Colour Constancy

    Does extending the number of channels from the 3 RGB sensors of a colour camera to 6 or 9 using a multispectral camera enhance the performance of illumination-estimation algorithms? Experiments are conducted with a variety of colour constancy algorithms (Maloney-Wandell, Chromagenic, Greyworld, Max RGB, and a Maloney-Wandell extension) measuring their performance as a function of the number of sensor channels. Although minor improvements were found with 6 channels, overall the results indicate that multispectral imagery is unlikely to lead to substantially better illumination-estimation performance.
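    The Maloney-Wandell family of methods rests on subspace testing: under a low-dimensional linear reflectance model, sensor responses under a given illuminant lie in a low-dimensional subspace, and the illuminant is chosen by which candidate subspace fits the observed responses best. A hedged sketch, with assumed data shapes:

```python
import numpy as np

def subspace_testing(responses, subspaces):
    """responses: (K, N) sensor responses from the image.
    subspaces: list of (N, d) orthonormal bases (d < N), one per
    candidate illuminant. Returns the index of the candidate whose
    subspace leaves the smallest projection residual."""
    errs = [np.linalg.norm(responses - responses @ B @ B.T)
            for B in subspaces]
    return int(np.argmin(errs))
```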

    Estimating varying illuminant colours in images

    Colour constancy is the ability to perceive colours independently of varying illumination colour. A human could tell that a white t-shirt was indeed white, even under blue or red illumination, although these illuminant colours would actually make the reflected colour of the t-shirt bluish or reddish. Humans can, to a good extent, see colours as constant. Getting a computer to achieve the same goal with a high level of accuracy has proven problematic, particularly if we want to use colour as a main cue in object recognition: if we trained a system on object colours under one illuminant and then tried to recognise the objects under another illuminant, the system would likely fail.

    Early colour constancy algorithms assumed that an image contains a single uniform illuminant. They would then attempt to estimate the colour of the illuminant to apply a single correction to the entire image. It is not hard to imagine a scenario where a scene is lit by more than one illuminant. In the case of an outdoor scene on a typical summer's day, we would see objects brightly lit by sunlight and others that are in shadow. The ambient light in shadow is known to be a different colour to that of direct sunlight (bluish and yellowish respectively), so there are at least two illuminant colours to be recovered in this scene. This thesis focuses on the harder case of recovering the illuminant colours when more than one is present in a scene.

    Early work on this subject made the empirical observation that illuminant colours are actually very predictable compared to surface colours: real-world illuminants tend not to be greens or purples, but rather blues, yellows and reds. We can think of an illuminant mapping as the function which takes a scene from some unknown illuminant to a known illuminant. We model this mapping as a simple multiplication of the red, green and blue channels of a pixel. It turns out that the set of realistic mappings approximately lies on a line segment in chromaticity space. We propose an algorithm that uses this knowledge and only requires two pixels of the same surface under two illuminants as input; we can then recover an estimate for the surface reflectance colour, and subsequently the two illuminants (a sketch follows this abstract).

    Additionally, we propose a more robust algorithm that can use varying surface reflectance data in a scene. One of the most successful colour constancy algorithms, known as Gamut Mapping, was developed by Forsyth (1990). He argued that the illuminant colour of a scene naturally constrains the surface colours that are possible to perceive: we could not perceive a very chromatic red under a deep blue illuminant. We introduce our multiple illuminant constraint in a Gamut Mapping context and are able to further improve its performance.

    The final piece of work proposes a method for detecting shadow edges, so that we can automatically recover estimates for the illuminant colours in and out of shadow. We also formulate our illuminant estimation algorithm in a voting scheme that probabilistically chooses an illuminant estimate on both sides of the shadow edge. We test the performance of all our algorithms experimentally on well-known datasets, as well as on our new proposed shadow datasets.
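    A minimal sketch of the two-pixel idea: under a diagonal (per-channel multiplicative) model, p1 = e1 * s and p2 = e2 * s, so the ratio p1/p2 equals e1/e2 and is independent of the surface. One can then search pairs of plausible illuminants, sampled from the line segment of realistic illuminants, for the best ratio match. All names and the brute-force search are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def recover_two_illuminants(p1, p2, candidates):
    """p1, p2: RGBs of the same surface under the two illuminants.
    candidates: (M, 3) plausible illuminant RGBs sampled along the
    line segment of realistic illuminants."""
    ratio = p1 / p2
    ratio = ratio / np.linalg.norm(ratio)   # remove brightness scale
    best, best_err = None, np.inf
    for e1 in candidates:
        for e2 in candidates:
            r = e1 / e2
            err = np.linalg.norm(r / np.linalg.norm(r) - ratio)
            if err < best_err:
                best, best_err = (e1, e2), err
    e1, e2 = best
    return e1, e2, p1 / e1    # the two illuminants + surface estimate
```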

    Green Stability Assumption: Unsupervised Learning for Statistics-Based Illumination Estimation

    In the image processing pipeline of almost every digital camera there is a part dedicated to computational color constancy, i.e. to removing the influence of illumination on the colors of the image scene. Some of the best known illumination estimation methods are the so-called statistics-based methods. They are less accurate than the learning-based illumination estimation methods, but they are faster and simpler to implement in embedded systems, which is one of the reasons for their widespread usage. Although in the relevant literature it often appears as if they require no training, this is not true, because they have parameter values that need to be fine-tuned in order to be more accurate. In this paper it is first shown that the accuracy of statistics-based methods reported in most papers was not obtained by means of the necessary cross-validation, but by using the whole benchmark datasets for both training and testing. After that, the corrected results are given for the best known benchmark datasets. Finally, the so-called green stability assumption is proposed, which can be used to fine-tune the values of the parameters of the statistics-based methods by using only non-calibrated images without known ground-truth illumination. The obtained accuracy is practically the same as when using calibrated training images, but the whole process is much faster. The experimental results are presented and discussed. The source code is available at http://www.fer.unizg.hr/ipg/resources/color_constancy/.
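    As an example of the tunable parameters this paper is about, the Shades-of-Grey estimator below has a Minkowski norm p whose value must be chosen: p = 1 reduces to Grey-World and large p approaches Max-RGB. This is a sketch of a standard statistics-based method, not of the paper's green-stability tuning procedure itself.

```python
import numpy as np

def shades_of_grey(img, p=6):
    """Statistics-based illuminant estimate with tunable Minkowski
    norm p. img: (H, W, 3) linear RGB. Returns a unit-norm estimate."""
    flat = img.reshape(-1, 3).astype(np.float64)
    est = (flat ** p).mean(axis=0) ** (1.0 / p)
    return est / np.linalg.norm(est)
```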

    Automatic and accurate shadow detection from (potentially) a single image using near-infrared information

    Shadows, due to their prevalence in natural images, are a long-studied phenomenon in digital photography and computer vision. Indeed, their presence can be a hindrance for a number of algorithms; accurate detection (and sometimes subsequent removal) of shadows in images is thus of paramount importance. In this paper, we present a method to detect shadows in a fast and accurate manner. To do so, we employ the inherent sensitivity of digital camera sensors to the near-infrared (NIR) part of the spectrum. We start by observing that commonly encountered light sources have very distinct spectra in the NIR, and propose that ratios of the colour channels (red, green and blue) to the NIR image give valuable information about the impinging illumination. In addition, we assume that shadows are contained in the darker parts of an image in both the visible and the NIR. This latter assumption is corroborated by the fact that a number of colorants are transparent to the NIR, thus making parts of the image that are dark in both the visible and NIR prime shadow candidates. These hypotheses allow for fast, accurate shadow detection in real, complex scenes, including soft and occlusion shadows. We demonstrate that the process is reliable enough to be performed in-camera on still mosaicked images by simulating a modified colour filter array (CFA) that can simultaneously capture NIR and visible images. Finally, we show that our binary shadow maps can be the input of a matting algorithm to improve their precision in a fully automatic manner.
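    The two cues described above translate directly into a candidate map; the sketch below is a loose illustration (the quantile threshold and function names are assumptions, not the paper's calibrated thresholds).

```python
import numpy as np

def shadow_candidates(rgb, nir, dark_q=0.25):
    """rgb: (H, W, 3) and nir: (H, W), linear and co-registered.
    Returns a boolean map of pixels dark in both visible and NIR,
    plus the colour-to-NIR ratios that carry illuminant information."""
    ratios = rgb / (nir[..., None] + 1e-6)      # R/NIR, G/NIR, B/NIR
    vis = rgb.mean(axis=-1)
    dark = (vis < np.quantile(vis, dark_q)) & \
           (nir < np.quantile(nir, dark_q))     # dark in both bands
    return dark, ratios
```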

    Illuminant retrieval for fixed location cameras

    Fixed location cameras, such as panoramic cameras or surveillance cameras, are very common. In images taken with these cameras, there will be changes in lighting and dynamic image content, but there will also be constant objects in the background. We propose to solve for color constancy in this framework. We use a set of images to recover the scenes’ illuminants using only a few surfaces present in the scene. Our method retrieves the illuminant in every image by minimizing the difference between the reflectance spectra of the redundant elements’ surfaces or, more precisely, between their corresponding sensor response values. It is assumed that these spectra are constant across images taken under different illuminants. We also recover an estimate of the reflectance spectra of the selected elements. Experiments on synthetic and real images validate our method.
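    The sensor response values mentioned here follow the standard discrete image-formation model, sketched below with assumed array shapes:

```python
import numpy as np

def sensor_response(illum_spd, reflectance, sensitivities):
    """Discrete image formation: a response is the illuminant power
    spectrum times the surface reflectance, weighted by the sensor
    sensitivities and summed over W wavelength samples.
    illum_spd, reflectance: (W,); sensitivities: (W, 3) -> (3,) RGB."""
    return sensitivities.T @ (illum_spd * reflectance)
```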

    Daylight illuminant retrieval using redundant image elements

    We present a method for retrieving illuminant spectra from a set of images taken with a fixed location camera, such as a surveillance or panoramic one. In these images, there will be significant changes in lighting conditions and scene content, but there will also be static elements in the background. As color constancy is an under-determined problem, we propose to exploit the redundancy and constancy offered by the static image elements to reduce the dimensionality of the problem. Specifically, we assume that the reflectance properties of these objects remain constant across the images taken with a given fixed camera. We demonstrate that we can retrieve illuminant and reflectance spectra in this framework by modeling the redundant image elements as a set of synthetic RGB patches. We define an error function that takes the RGB patches and a set of test illuminants as input and returns a similarity measure of the redundant surfaces’ reflectances. The test illuminants are then varied until the error function is minimized, returning the illuminants under which each image in the set was captured. This is achieved by gradient descent, providing an optimization method that is robust to shot noise.
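    A toy version of the described optimisation, with a diagonal camera model in place of the paper's full spectral one (all names, the consensus-reflectance error and the finite-difference descent are assumptions for illustration):

```python
import numpy as np

def reflectance_error(illums, patches):
    """Divide each image's static-patch RGBs by its test illuminant
    and measure how far the recovered reflectances are from agreeing
    across images. illums: (M, 3); patches: (M, K, 3)."""
    refl = patches / illums[:, None, :]
    return np.sum((refl - refl.mean(axis=0, keepdims=True)) ** 2)

def retrieve_illuminants(patches, steps=200, lr=1e-2, eps=1e-4):
    """Vary the test illuminants until the error is minimised, here
    by finite-difference gradient descent."""
    illums = np.ones((patches.shape[0], 3))
    for _ in range(steps):
        grad = np.zeros_like(illums)
        for idx in np.ndindex(*illums.shape):
            d = np.zeros_like(illums)
            d[idx] = eps
            grad[idx] = (reflectance_error(illums + d, patches)
                         - reflectance_error(illums - d, patches)) / (2 * eps)
        illums -= lr * grad
        illums /= np.linalg.norm(illums, axis=1, keepdims=True)
    return illums
```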