9 research outputs found

    Colour Constancy For Non-Uniform Illuminant using Image Textures

    Get PDF
    Colour constancy (CC) is the ability to perceive the true colour of a scene from its image regardless of changes in the scene's illuminant. Colour constancy is a significant part of the digital image processing pipeline, more precisely, wherever the true colour of an object is needed. Most existing CC algorithms assume a uniform illuminant across the whole scene of the image, which is not always the case. Hence, their performance is influenced by the presence of multiple light sources. This paper presents a colour constancy algorithm that uses image texture for images of uniformly and non-uniformly lit scenes. The proposed algorithm applies the K-means algorithm to segment the input image based on its colour features. Each segment's texture is then extracted using an entropy analysis algorithm. The colour information of the texture pixels is then used to calculate an initial colour constancy adjustment factor for each segment. Finally, the colour constancy adjustment factors for each pixel within the image are determined by fusing the adjustment factors of all segments, regulated by the Euclidean distance of the pixel from the centre of each segment. Experimental results on both single- and multiple-illuminant image datasets show that the proposed algorithm outperforms the existing state-of-the-art colour constancy algorithms, particularly when the images are lit by multiple light sources.
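    The colour-based segmentation step described above can be sketched in a few lines of numpy. This is a minimal, illustrative K-means with a deterministic farthest-point initialisation (a K-means++-style seeding), not the authors' implementation; all names and parameter values here are assumptions.

    ```python
    import numpy as np

    def kmeans_segment(img, k=3, iters=20):
        """Cluster pixels by colour. img: HxWx3 float array; returns HxW labels."""
        pixels = img.reshape(-1, 3).astype(np.float64)
        # Greedy farthest-point initialisation: start from the first pixel,
        # then repeatedly pick the pixel farthest from all chosen centres.
        centres = [pixels[0]]
        for _ in range(1, k):
            d = np.min(np.linalg.norm(pixels[:, None] - np.array(centres)[None],
                                      axis=2), axis=1)
            centres.append(pixels[d.argmax()])
        centres = np.array(centres)
        for _ in range(iters):
            # Assign each pixel to its nearest centre in RGB space.
            d = np.linalg.norm(pixels[:, None] - centres[None], axis=2)
            labels = d.argmin(axis=1)
            # Move each centre to the mean colour of its assigned pixels.
            for j in range(k):
                if np.any(labels == j):
                    centres[j] = pixels[labels == j].mean(axis=0)
        return labels.reshape(img.shape[:2])
    ```

    On an image containing two distinct colour regions and k=2, the label map splits cleanly along the colour boundary.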

    Green Stability Assumption: Unsupervised Learning for Statistics-Based Illumination Estimation

    Full text link
    In the image processing pipeline of almost every digital camera there is a part dedicated to computational color constancy, i.e. to removing the influence of illumination on the colors of the image scene. Some of the best known illumination estimation methods are the so-called statistics-based methods. They are less accurate than the learning-based illumination estimation methods, but they are faster and simpler to implement in embedded systems, which is one of the reasons for their widespread usage. Although in the relevant literature it often appears as if they require no training, this is not true, because they have parameter values that need to be fine-tuned in order to be more accurate. In this paper it is first shown that the accuracy of statistics-based methods reported in most papers was not obtained by means of the necessary cross-validation, but by using the whole benchmark datasets for both training and testing. After that, corrected results are given for the best known benchmark datasets. Finally, the so-called green stability assumption is proposed, which can be used to fine-tune the values of the parameters of the statistics-based methods by using only non-calibrated images without known ground-truth illumination. The obtained accuracy is practically the same as when using calibrated training images, but the whole process is much faster. The experimental results are presented and discussed. The source code is available at http://www.fer.unizg.hr/ipg/resources/color_constancy/. Comment: 5 pages, 3 figures.
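    To make concrete what a "statistics-based method with a tunable parameter" looks like, here is a minimal numpy sketch of the well-known Shades of Gray estimator, whose Minkowski norm order p is exactly the kind of parameter such papers fine-tune (p=1 reduces to Gray World; large p approaches Max-RGB). This is a generic textbook formulation, not the code from the linked repository.

    ```python
    import numpy as np

    def shades_of_gray(img, p=6):
        """Estimate the scene illuminant under the Shades of Gray assumption.

        img: HxWx3 float array with values in [0, 1].
        p:   Minkowski norm order (the tunable parameter).
        Returns a unit-length RGB illuminant estimate.
        """
        flat = img.reshape(-1, 3).astype(np.float64)
        est = np.power(np.mean(np.power(flat, p), axis=0), 1.0 / p)
        return est / np.linalg.norm(est)

    def correct(img, illuminant):
        # Von Kries-style diagonal correction: divide each channel by its
        # estimated illuminant component (scaled so a neutral illuminant
        # leaves the image unchanged).
        return np.clip(img / (illuminant * np.sqrt(3.0)), 0.0, 1.0)
    ```

    For a scene that is uniformly tinted by one illuminant colour, the estimate recovers that colour up to scale, and the corrected image becomes achromatic.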

    Color Constancy Adjustment using Sub-blocks of the Image

    Get PDF
    An extreme presence of the source light in digital images decreases the performance of many image processing algorithms, such as video analytics, object tracking and image segmentation. This paper presents a color constancy adjustment technique which lessens the impact of large unvarying color areas of the image on the performance of the existing statistics-based color correction algorithms. The proposed algorithm splits the input image into several non-overlapping blocks. It uses the Average Absolute Difference (AAD) value of each block's color components as a measure to determine whether the block has adequate color information to contribute to the color adjustment of the whole image. It is shown through experiments that by excluding the unvarying color areas of the image, the performance of the existing statistics-based color constancy methods is significantly improved. Experimental results on four benchmark image datasets validate that images produced by the proposed framework, using the Gray World, Max-RGB and Shades of Gray statistics-based methods, have significantly higher subjective and competitive objective color constancy than those of the existing and state-of-the-art methods.
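    The block-selection idea above can be sketched directly: compute each block's AAD and keep only blocks above a threshold, so that large flat colour areas do not bias a downstream statistics-based estimator. The block size and threshold below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def aad(block):
        # Average Absolute Difference of a block from its per-channel mean:
        # a cheap measure of how much colour variation the block contains.
        return np.abs(block - block.mean(axis=(0, 1))).mean()

    def informative_pixels(img, block=8, threshold=0.02):
        """Collect pixels from non-overlapping blocks whose AAD exceeds a
        threshold; flat blocks are excluded from the colour estimate."""
        h, w = img.shape[:2]
        keep = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                b = img[y:y + block, x:x + block]
                if aad(b) > threshold:
                    keep.append(b.reshape(-1, 3))
        # Fall back to the whole image if every block is flat.
        return np.concatenate(keep) if keep else img.reshape(-1, 3)
    ```

    A statistics-based method such as Gray World would then be applied to the returned pixel set instead of the full image.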

    Colour Constancy for Image of Non-Uniformly Lit Scenes

    Get PDF
    Digital camera sensors are designed to record all incident light from a captured scene, but they are unable to distinguish between the colour of the light source and the true colour of objects. The resulting captured image exhibits a colour cast toward the colour of the light source. This paper presents a colour constancy algorithm for images of scenes lit by non-uniform light sources. The proposed algorithm uses a histogram-based algorithm to determine the number of colour regions. It then applies the K-means++ algorithm to the input image, dividing it into its segments. The proposed algorithm computes the Normalized Average Absolute Difference (NAAD) for each segment and uses it as a measure to determine whether the segment has sufficient colour variation. Initial colour constancy adjustment factors are calculated for each segment with sufficient colour variation. The Colour Constancy Adjustment Weighting Factors (CCAWF) for each pixel of the image are then determined by fusing the CCAWFs of the segments, weighted by the normalized Euclidean distances of the pixel from the centres of the segments. Results show that the proposed method outperforms the statistical techniques and that its images exhibit significantly higher subjective quality than those of the learning-based methods. In addition, the execution time of the proposed algorithm is comparable to that of statistics-based techniques and much lower than those of the state-of-the-art learning-based methods.
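    The final fusion step, blending per-segment correction factors into a per-pixel map weighted by distance to the segment centres, can be sketched as below. This is a generic inverse-distance weighting, assumed here as one plausible reading of the described fusion; the function and parameter names are mine.

    ```python
    import numpy as np

    def fuse_gains(shape, centres, gains, eps=1e-6):
        """Blend per-segment RGB gains into per-pixel gains, weighted by
        inverse Euclidean distance from each pixel to the segment centres.

        shape:   (H, W) of the image.
        centres: (K, 2) segment centres in (row, col) coordinates.
        gains:   (K, 3) per-segment RGB correction factors.
        Returns an (H, W, 3) per-pixel gain map.
        """
        h, w = shape
        ys, xs = np.mgrid[0:h, 0:w]
        coords = np.stack([ys, xs], axis=-1).astype(np.float64)      # (H, W, 2)
        d = np.linalg.norm(coords[:, :, None, :] - centres[None, None], axis=-1)
        wgt = 1.0 / (d + eps)                                        # closer => heavier
        wgt /= wgt.sum(axis=2, keepdims=True)                        # normalise over segments
        return np.einsum('hwk,kc->hwc', wgt, gains)
    ```

    A pixel lying on a segment centre receives (almost exactly) that segment's gain, while pixels between centres receive a smooth blend, which avoids visible seams at segment boundaries.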

    Color Constancy for Uniform and Non-uniform Illuminant Using Image Texture

    Get PDF
    Color constancy is the capability to observe the true color of a scene from its image regardless of the scene's illuminant. It is a significant part of the digital image processing pipeline and is utilized when the true color of an object is required. Most existing color constancy methods assume a uniform illuminant across the whole scene of the image, which is not always the case. Hence, their performance is influenced by the presence of multiple light sources. This paper presents a color constancy adjustment technique that uses the texture of the image pixels to select pixels with sufficient color variation for image color correction. The proposed technique applies a histogram-based algorithm to determine the appropriate number of segments to efficiently split the image into its key color variation areas. The K-means++ algorithm is then used to divide the input image into the pre-determined number of segments. The proposed algorithm identifies pixels with sufficient color variation in each segment using the entropies of the pixels, which represent the segment's texture. Then, the algorithm calculates the initial color constancy adjustment factors for each segment by applying an existing statistics-based color constancy algorithm to the selected pixels. Finally, the proposed method computes color adjustment factors per pixel within the image by fusing the initial color adjustment factors of all segments, regulated by the Euclidean distances of each pixel from the centers of gravity of the segments. Experimental results on benchmark single- and multiple-illuminant image datasets show that images obtained using the proposed algorithm have significantly higher subjective and very competitive objective quality compared with those obtained with the state-of-the-art techniques.
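    The entropy-based pixel selection mentioned above can be illustrated with local Shannon entropy: flat regions score near zero, textured regions score high, and thresholding the score yields the "sufficient colour variation" mask. This brute-force sliding-window sketch is illustrative only; window size, bin count and threshold are assumptions, not the paper's values.

    ```python
    import numpy as np

    def shannon_entropy(values, bins=32):
        """Shannon entropy (bits) of a 1-D sample's histogram, used here as a
        proxy for local texture."""
        hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def textured_mask(gray, win=5, threshold=1.0):
        """Mark pixels whose local window entropy exceeds a threshold.
        gray: HxW array in [0, 1]."""
        h, w = gray.shape
        r = win // 2
        mask = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                patch = gray[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                mask[y, x] = shannon_entropy(patch.ravel()) > threshold
        return mask
    ```

    A statistics-based estimator would then be run only on the pixels where the mask is true, per segment.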

    Improving the white patch method by subsampling

    No full text