41 research outputs found

    A Large Image Database for Color Constancy Research

    We present a study on various statistics relevant to research on color constancy. Many of these analyses could not have been done before simply because a large database for color constancy was not available. Our image database consists of approximately 11,000 images in which the RGB color of the ambient illuminant in each scene is measured. To build such a large database we used a novel set-up consisting of a digital video camera with a neutral gray sphere attached to the camera so that the sphere always appears in the field of view. Using a gray sphere instead of the standard gray card facilitates measurement of the variation in illumination as a function of incident angle. The study focuses on the analysis of the distribution of illuminants in natural scenes and the correlation between the rg-chromaticity of colors recorded by the camera and the rg-chromaticity of the ambient illuminant. We also investigate the possibility of improving the performance of the naïve Gray World algorithm by considering a sequence of consecutive frames instead of a single image. The set of images is publicly available and can also be used as a database for testing color constancy algorithms.
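
    The frame-pooled Gray World idea can be illustrated with a minimal sketch, assuming RGB frames given as H x W x 3 arrays: the per-frame channel means are averaged over the sequence before converting to rg-chromaticity. This is an illustration of the idea, not the authors' exact procedure.

    import numpy as np

    def gray_world_sequence(frames):
        # Estimate the illuminant by the Gray World assumption, pooling the
        # per-frame channel means over a list of consecutive frames instead
        # of using a single image.
        means = np.stack([f.reshape(-1, 3).mean(axis=0) for f in frames])
        illum = means.mean(axis=0)          # pooled Gray World estimate (R, G, B)
        return illum[:2] / illum.sum()      # rg-chromaticity of the estimate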

    Colour Constancy: Biologically-inspired Contrast Variant Pooling Mechanism

    Pooling is a ubiquitous operation in image processing algorithms that allows higher-level processes to collect relevant low-level features from a region of interest. Currently, max-pooling is one of the most commonly used operators in the computational literature. However, it can lack robustness to outliers because it relies solely on the peak of a function. Pooling mechanisms are also present in the primate visual cortex, where neurons of higher cortical areas pool signals from lower ones. The receptive fields of these neurons have been shown to vary with contrast, aggregating signals over a larger region in the presence of low-contrast stimuli. We hypothesise that this contrast-variant pooling mechanism can address some of the shortcomings of max-pooling. We modelled this contrast variation through histogram clipping in which the percentage of pooled signal is inversely proportional to the local contrast of an image. We tested our hypothesis by applying it to the phenomenon of colour constancy, where a number of popular algorithms utilise a max-pooling step (e.g. White-Patch, Grey-Edge and Double-Opponency). For each of these methods, we investigated the consequences of replacing their original max-pooling by the proposed contrast-variant pooling. Our experiments on three colour constancy benchmark datasets suggest that previous results can be significantly improved by adopting a contrast-variant pooling mechanism.
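
    A contrast-variant pooling step can be sketched as follows, assuming a simple global contrast proxy (standard deviation over mean) and an assumed range of pooled percentages; the paper's clipping percentages and contrast measure may differ. The sketch replaces the max-pooling of White-Patch with an average over the top fraction of values.

    import numpy as np

    def contrast_variant_pool(channel, p_min=0.01, p_max=0.20):
        # Pool one colour channel by averaging its top-percentile values,
        # where the pooled fraction grows as the contrast proxy shrinks.
        # p_min, p_max and the contrast proxy are illustrative assumptions.
        x = channel.astype(float).ravel()
        contrast = x.std() / (x.mean() + 1e-6)
        frac = p_min + (p_max - p_min) * (1.0 - np.clip(contrast, 0.0, 1.0))
        k = max(1, int(frac * x.size))      # number of top values to pool
        return np.sort(x)[-k:].mean()       # clipped-histogram pooling

    def white_patch_cvp(img):
        # White-Patch with max-pooling replaced by contrast-variant pooling.
        return np.array([contrast_variant_pool(img[..., c]) for c in range(3)])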

    Color Cerberus

    A simple convolutional neural network was able to win the ISISPA color constancy competition. A partial reimplementation of the (Bianco, 2017) neural architecture would have shown even better results in this setup.

    Rank-Based Illumination Estimation

    A new two-stage illumination estimation method based on the concept of rank is presented. The method first estimates the illuminant locally in subwindows using a ranking of digital counts in each color channel and then combines the local subwindow estimates, again based on a ranking of the local estimates. The proposed method unifies the MaxRGB and Grayworld methods. Despite its simplicity, the performance of the method is found to be competitive with other state-of-the-art methods for estimating the chromaticity of the overall scene illumination.
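
    A minimal sketch of the two-stage rank idea is given below, assuming square non-overlapping subwindows and quantile (rank) parameters chosen purely for illustration; setting the local quantile to 1 reduces the first stage to per-window MaxRGB.

    import numpy as np

    def rank_based_estimate(img, win=64, q_local=0.95, q_global=0.5):
        # Stage 1: per-channel quantile of digital counts inside each subwindow.
        # Stage 2: quantile of the local estimates across subwindows.
        # win, q_local and q_global are assumed values, not the paper's settings.
        h, w, _ = img.shape
        local = []
        for y in range(0, h - win + 1, win):
            for x in range(0, w - win + 1, win):
                block = img[y:y + win, x:x + win].reshape(-1, 3).astype(float)
                local.append(np.quantile(block, q_local, axis=0))
        local = np.stack(local)
        return np.quantile(local, q_global, axis=0)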

    True colour retrieval from multiple illuminant scene’s image

    This paper presents an algorithm to retrieve the true colour of an image captured under multiple illuminants. The proposed method uses histogram analysis and the K-means++ clustering technique to split the input image into a number of segments. It then determines the normalised average absolute difference (NAAD) of each resulting segment's colour components. If the NAAD of a segment's component is greater than an empirically determined threshold, the segment is assumed not to represent a uniform colour area, and that colour component is selected to be used for the image colour constancy adjustment. The initial colour balancing factor for each chosen segment's component is calculated using the Minkowski norm, based on the principle that the average values of the image colour components are achromatic. Finally, colour constancy adjustment factors for each image pixel are calculated by fusing the initial colour constancy factors of the chosen segments, weighted by the normalised Euclidean distances of the pixel from the centroids of the selected segments. Experimental results using benchmark single- and multiple-illuminant image datasets show that the proposed method's images subjectively exhibit the highest colour constancy in the presence of multiple illuminants, and also when the image contains uniform colour areas.
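
    The described pipeline can be sketched roughly as below, using scikit-learn's K-means with k-means++ initialisation. The cluster count, NAAD threshold, Minkowski order and the inverse-distance weighting are illustrative assumptions, and segments are selected as a whole rather than per colour component for brevity.

    import numpy as np
    from sklearn.cluster import KMeans

    def naad(values):
        # Normalised average absolute difference of one colour component.
        m = values.mean() + 1e-6
        return np.abs(values - m).mean() / m

    def multi_illuminant_correct(img, k=4, naad_thresh=0.05, p=6):
        # k, naad_thresh and p are assumed values, not the paper's tuned ones.
        h, w, _ = img.shape
        pix = img.reshape(-1, 3).astype(float)
        labels = KMeans(n_clusters=k, init='k-means++', n_init=5).fit(pix).labels_

        ys, xs = np.mgrid[0:h, 0:w]
        coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
        gains, centroids = [], []
        for s in range(k):
            seg = pix[labels == s]
            if seg.size == 0 or not any(naad(seg[:, c]) > naad_thresh for c in range(3)):
                continue                                    # uniform-colour segment: skip
            illum = (seg ** p).mean(axis=0) ** (1.0 / p)    # Minkowski-norm estimate
            gains.append(illum.mean() / illum)              # per-channel balancing factors
            centroids.append(coords[labels == s].mean(axis=0))

        if not gains:                                       # nothing selected: global fallback
            illum = (pix ** p).mean(axis=0) ** (1.0 / p)
            return (pix * (illum.mean() / illum)).reshape(h, w, 3)

        gains, centroids = np.array(gains), np.array(centroids)
        # Fuse segment gains per pixel, weighted by inverse normalised centroid distance.
        d = np.linalg.norm(coords[:, None, :] - centroids[None, :, :], axis=2) + 1e-6
        wgt = 1.0 / d
        wgt /= wgt.sum(axis=1, keepdims=True)
        return (pix * (wgt @ gains)).reshape(h, w, 3)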

    Colour Constancy For Non‐Uniform Illuminant using Image Textures

    Colour constancy (CC) is the ability to perceive the true colour of a scene in its image regardless of changes in the scene's illuminant. Colour constancy is a significant part of the digital image processing pipeline, particularly where the true colour of the object is needed. Most existing CC algorithms assume a uniform illuminant across the whole scene of the image, which is not always the case. Hence, their performance is influenced by the presence of multiple light sources. This paper presents a colour constancy algorithm using image texture for uniformly and non-uniformly lit scene images. The proposed algorithm applies the K-means algorithm to segment the input image based on its colour features. Each segment's texture is then extracted using an entropy analysis algorithm. The colour information of the texture pixels is then used to calculate an initial colour constancy adjustment factor for each segment. Finally, the colour constancy adjustment factors for each pixel within the image are determined by fusing the factors of all segments, weighted by the Euclidean distance of the pixel from the centres of the segments. Experimental results on both single- and multiple-illuminant image datasets show that the proposed algorithm outperforms existing state-of-the-art colour constancy algorithms, particularly when the images are lit by multiple light sources.
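
    The entropy-based texture selection step can be approximated with a sliding-window Shannon entropy, as in the sketch below; the window size, histogram bins and entropy threshold are assumed values, not the paper's settings, and the segmentation and fusion stages are as in the previous sketch.

    import numpy as np
    from scipy.ndimage import generic_filter

    def local_entropy(gray, win=9, bins=32):
        # Shannon entropy of grey levels inside a sliding window; a simple
        # stand-in for the paper's entropy-based texture analysis.
        def _ent(window):
            hist, _ = np.histogram(window, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            return -(p * np.log2(p)).sum()
        return generic_filter(gray.astype(float), _ent, size=win)

    def texture_mask(img, ent_thresh=3.0, win=9):
        # Keep high-entropy (textured) pixels whose colours would feed the
        # per-segment adjustment factors; ent_thresh is an assumed value.
        gray = img.astype(float).mean(axis=2) / 255.0
        return local_entropy(gray, win=win) > ent_thresh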

    Max-RGB based Colour Constancy using the Sub-blocks of the Image

    Colour constancy refers to the task of revealing the true colour of an object despite the presence of the ambient illuminant. The performance of most existing colour constancy algorithms deteriorates when the image contains a large patch of uniform colour. This paper presents a Max-RGB based colour constancy adjustment method using sub-blocks of the image to significantly reduce the effect of large uniform colour areas of the scene on the colour constancy adjustment of the image. The proposed method divides the input image into a number of non-overlapping blocks and computes the Average Absolute Difference (AAD) value of each block's colour components. Blocks with AAD values greater than the thresholds are considered to have sufficient colour variation to be used for colour constancy adjustment. The Max-RGB algorithm is then applied to the selected blocks' pixels to calculate colour constancy scaling factors for the whole image. Evaluations of the performance of the proposed method on images of three benchmark datasets show that the proposed method outperforms state-of-the-art techniques in the presence of large uniform colour patches.
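
    A rough sketch of the block-selection idea follows, assuming 8-bit pixel values, an assumed block size and AAD threshold, and requiring sufficient variation in all three channels; the returned factors are per-channel gains normalised by their mean.

    import numpy as np

    def block_maxrgb(img, block=64, aad_thresh=10.0):
        # Split the image into non-overlapping blocks, keep blocks whose
        # per-channel average absolute difference (AAD) exceeds the threshold,
        # then apply Max-RGB to the kept pixels only.
        # block and aad_thresh are assumed values for 8-bit images.
        h, w, _ = img.shape
        keep = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                b = img[y:y + block, x:x + block].reshape(-1, 3).astype(float)
                aad = np.abs(b - b.mean(axis=0)).mean(axis=0)
                if np.all(aad > aad_thresh):        # enough colour variation in every channel
                    keep.append(b)
        pix = np.concatenate(keep) if keep else img.reshape(-1, 3).astype(float)
        illum = pix.max(axis=0)                     # Max-RGB on the selected blocks
        return illum.mean() / illum                 # per-channel scaling factors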