
    Color Constancy Algorithm for Mixed-illuminant Scene Images

    The intrinsic properties of the ambient illuminant significantly alter the true colors of objects within an image. Most existing color constancy algorithms assume a uniformly lit scene across the image, and their performance deteriorates considerably in the presence of mixed illuminants. A potential solution to this problem is to determine the color constancy weighting factor (CCWF) for each pixel by combining regional weighting factors. This paper presents a color constancy algorithm for mixed-illuminant scene images. The proposed algorithm splits the input image into multiple segments and uses the normalized average absolute difference (NAAD) of each segment as a measure of whether the segment’s pixels contain reliable color constancy information. The Max-RGB principle is then used to calculate the initial weighting factor for each selected segment. The CCWF for each image pixel is then calculated by combining the weighting factors of the selected segments, adjusted by the normalized Euclidean distances of the pixel from the centers of the selected segments. Experimental results on images from five benchmark datasets show that the proposed algorithm subjectively outperforms state-of-the-art techniques, while its objective performance is comparable to theirs.
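    A minimal sketch of the segment-selection and per-pixel fusion idea described above, assuming a fixed grid of segments; the grid size, NAAD threshold and inverse-distance fusion rule are illustrative choices rather than the paper's exact settings.

```python
import numpy as np

def naad(channel):
    """Normalized average absolute difference of one colour channel of a segment."""
    mean = channel.mean()
    return np.abs(channel - mean).mean() / (mean + 1e-8)

def mixed_illuminant_correction(img, grid=4, naad_thresh=0.05):
    """img: float RGB array in [0, 1], shape (h, w, 3)."""
    h, w, _ = img.shape
    centers, gains = [], []
    for i in range(grid):
        for j in range(grid):
            seg = img[i * h // grid:(i + 1) * h // grid,
                      j * w // grid:(j + 1) * w // grid]
            # Keep only segments whose channels vary enough to carry reliable CC information.
            if all(naad(seg[..., c]) > naad_thresh for c in range(3)):
                # Max-RGB: the brightest response per channel approximates the local illuminant.
                illum = seg.reshape(-1, 3).max(axis=0) + 1e-8
                gains.append(illum.max() / illum)  # per-channel gain for this segment
                centers.append(((i + 0.5) * h / grid, (j + 0.5) * w / grid))
    if not gains:
        return img
    centers, gains = np.array(centers), np.array(gains)
    ys, xs = np.mgrid[0:h, 0:w]
    # Per-pixel gain: segment gains combined with weights inversely related to the
    # distance from the pixel to each selected segment's center.
    d = np.sqrt((ys[..., None] - centers[:, 0]) ** 2 +
                (xs[..., None] - centers[:, 1]) ** 2) + 1e-8
    weights = 1.0 / d
    weights /= weights.sum(axis=-1, keepdims=True)
    return np.clip(img * (weights @ gains), 0, 1)
```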

    Color Constancy Convolutional Autoencoder

    In this paper, we study the importance of pre-training for the generalization capability of color constancy models. We propose two novel approaches based on convolutional autoencoders: an unsupervised pre-training algorithm using a fine-tuned encoder and a semi-supervised pre-training algorithm using a novel composite loss function. This enables us to address the data scarcity problem and achieve results competitive with the state of the art, while requiring far fewer parameters, on the ColorChecker RECommended dataset. We further study the over-fitting phenomenon on the recently introduced version of the INTEL-TUT Dataset for Camera Invariant Color Constancy Research, which contains both field and non-field scenes acquired by three different camera models.
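    A minimal PyTorch sketch of the general idea: a convolutional autoencoder pre-trained on image reconstruction, whose encoder is then reused with a small regression head to estimate the illuminant. The layer sizes, the regression head and the exact form of the composite loss are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Autoencoder for unsupervised pre-training on image reconstruction."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class IlluminantEstimator(nn.Module):
    """Pre-trained encoder plus a small head that regresses the scene illuminant."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 3))

    def forward(self, x):
        rgb = self.head(self.encoder(x))
        return rgb / rgb.norm(dim=1, keepdim=True)  # unit-norm illuminant estimate

def composite_loss(recon, target_img, est_illum, true_illum, alpha=0.5):
    """Assumed form of a semi-supervised composite loss: reconstruction error on
    images plus angular error on labelled illuminants."""
    recon_term = nn.functional.mse_loss(recon, target_img)
    cos = (est_illum * true_illum).sum(dim=1).clamp(-1, 1)
    angular_term = torch.acos(cos).mean()
    return alpha * recon_term + (1 - alpha) * angular_term
```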

    Colour Constancy: Biologically-inspired Contrast Variant Pooling Mechanism

    Pooling is a ubiquitous operation in image processing algorithms that allows higher-level processes to collect relevant low-level features from a region of interest. Max-pooling is currently one of the most commonly used operators in the computational literature; however, it can lack robustness to outliers because it relies solely on the peak of a function. Pooling mechanisms are also present in the primate visual cortex, where neurons in higher cortical areas pool signals from lower ones. The receptive fields of these neurons have been shown to vary with contrast, aggregating signals over a larger region in the presence of low-contrast stimuli. We hypothesise that this contrast-variant pooling mechanism can address some of the shortcomings of max-pooling. We model this contrast variation through histogram clipping, in which the percentage of pooled signal is inversely proportional to the local contrast of an image. We tested our hypothesis by applying it to the phenomenon of colour constancy, where a number of popular algorithms utilise a max-pooling step (e.g. White-Patch, Grey-Edge and Double-Opponency). For each of these methods, we investigated the consequences of replacing the original max-pooling with the proposed contrast-variant pooling. Our experiments on three colour constancy benchmark datasets suggest that previous results can be significantly improved by adopting a contrast-variant pooling mechanism.
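    A minimal sketch of contrast-variant pooling applied to a White-Patch-style estimator: instead of taking the single per-channel maximum, the top p% of values are averaged, with p growing as contrast falls. The global contrast measure and the mapping from contrast to p are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def contrast_variant_pool(channel, p_min=1.0, p_max=20.0):
    """Average the top-p% of values, where p is inversely related to contrast."""
    contrast = channel.std() / (channel.mean() + 1e-8)  # simple contrast proxy
    # Low contrast -> pool a larger fraction; high contrast -> behave like max-pooling.
    p = np.clip(p_max * (1.0 - contrast), p_min, p_max)
    thresh = np.percentile(channel, 100.0 - p)
    return channel[channel >= thresh].mean()

def white_patch_cvp(img):
    """White-Patch with its max-pooling step replaced by contrast-variant pooling."""
    illum = np.array([contrast_variant_pool(img[..., c]) for c in range(3)]) + 1e-8
    return np.clip(img * (illum.max() / illum), 0, 1)
```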

    Color Constancy Adjustment using Sub-blocks of the Image

    An extreme presence of the source light in digital images decreases the performance of many image processing algorithms, such as video analytics, object tracking and image segmentation. This paper presents a color constancy adjustment technique that lessens the impact of large unvarying color areas of the image on the performance of existing statistics-based color correction algorithms. The proposed algorithm splits the input image into several non-overlapping blocks. It uses the Average Absolute Difference (AAD) value of each block’s color components as a measure to determine whether the block has adequate color information to contribute to the color adjustment of the whole image. It is shown through experiments that, by excluding the unvarying color areas of the image, the performance of existing statistics-based color constancy methods is significantly improved. Experimental results on four benchmark image datasets validate that images produced by the proposed framework, applied to the Gray World, Max-RGB and Shades of Gray statistics-based methods, have significantly higher subjective color constancy and competitive objective color constancy compared with those produced by the existing and state-of-the-art methods.
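    A minimal sketch of the block-selection idea with a Gray World estimator: blocks whose per-channel AAD falls below a threshold are treated as unvarying colour areas and excluded from the illuminant estimate. The block size and threshold are illustrative assumptions.

```python
import numpy as np

def aad(block_channel):
    """Average Absolute Difference of one colour channel within a block."""
    return np.abs(block_channel - block_channel.mean()).mean()

def block_selected_gray_world(img, block=64, aad_thresh=0.02):
    """img: float RGB array in [0, 1]; estimate the illuminant from informative blocks only."""
    h, w, _ = img.shape
    selected = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = img[y:y + block, x:x + block]
            # Keep blocks with enough colour variation to contribute to the estimate.
            if all(aad(blk[..., c]) > aad_thresh for c in range(3)):
                selected.append(blk.reshape(-1, 3))
    pixels = np.concatenate(selected) if selected else img.reshape(-1, 3)
    illum = pixels.mean(axis=0) + 1e-8  # Gray World estimate from the selected pixels
    return np.clip(img * (illum.mean() / illum), 0, 1)
```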

    Colour Constancy For Non‐Uniform Illuminant using Image Textures

    Colour constancy (CC) is the ability to perceive the true colour of a scene in its image regardless of changes in the scene’s illuminant. Colour constancy is a significant part of the digital image processing pipeline, particularly where the true colour of an object is needed. Most existing CC algorithms assume a uniform illuminant across the whole scene of the image, which is not always the case; hence, their performance is affected by the presence of multiple light sources. This paper presents a colour constancy algorithm that uses image texture for uniformly and non-uniformly lit scene images. The proposed algorithm applies the K-means algorithm to segment the input image based on its colour features. Each segment’s texture is then extracted using an entropy analysis algorithm. The colour information of the texture pixels is then used to calculate an initial colour constancy adjustment factor for each segment. Finally, the colour constancy adjustment factor for each pixel within the image is determined by fusing the adjustment factors of all segments, weighted by the Euclidean distance of the pixel from the centre of each segment. Experimental results on both single- and multiple-illuminant image datasets show that the proposed algorithm outperforms existing state-of-the-art colour constancy algorithms, particularly when the images are lit by multiple light sources.
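    A minimal sketch of the texture-guided, segment-wise idea using scikit-learn and scikit-image: K-means colour segmentation, a local-entropy mask to select textured pixels, a Grey-World-style gain per segment, and per-pixel fusion weighted by distance to segment centres. The value of K, the entropy window and threshold, and the fusion rule are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def texture_segment_cc(img, k=4, entropy_thresh=4.0):
    """img: float RGB array in [0, 1], shape (h, w, 3)."""
    h, w, _ = img.shape
    labels = KMeans(n_clusters=k, n_init=4).fit_predict(img.reshape(-1, 3)).reshape(h, w)
    ent = entropy(img_as_ubyte(img.mean(axis=2)), disk(5))  # local entropy as a texture measure
    gains, centres = [], []
    for s in range(k):
        mask = (labels == s) & (ent > entropy_thresh)  # textured pixels of this segment
        if mask.sum() < 100:
            continue
        illum = img[mask].mean(axis=0) + 1e-8          # estimate from texture pixels only
        gains.append(illum.mean() / illum)
        ys, xs = np.nonzero(labels == s)
        centres.append((ys.mean(), xs.mean()))
    if not gains:
        return img
    gains, centres = np.array(gains), np.array(centres)
    ys, xs = np.mgrid[0:h, 0:w]
    # Fuse the segment gains per pixel, weighted by inverse distance to segment centres.
    d = np.sqrt((ys[..., None] - centres[:, 0]) ** 2 +
                (xs[..., None] - centres[:, 1]) ** 2) + 1e-8
    weights = 1.0 / d
    weights /= weights.sum(axis=-1, keepdims=True)
    return np.clip(img * (weights @ gains), 0, 1)
```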

    Algorithms for the enhancement of dynamic range and colour constancy of digital images & video

    One of the main objectives in digital imaging is to mimic the capabilities of the human eye and, perhaps, go beyond them in certain aspects. However, the human visual system is so versatile, complex and only partially understood that no current imaging technology has been able to accurately reproduce its capabilities. Matching the extraordinary capabilities of the human eye has become a crucial challenge in digital imaging, since digital photography, video recording and computer vision applications continue to demand more realistic and accurate imaging reproduction and analytic capabilities. Over decades, researchers have tried to solve the colour constancy problem, as well as to extend the dynamic range of digital imaging devices, by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partially due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and to the complexity with which the human visual system achieves effective colour constancy and dynamic range capabilities. The aim of the research presented in this thesis is to enhance the overall image quality within an image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer electronics imaging devices. The experiments conducted in this research show that the proposed algorithms surpass state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this unique set of image processing algorithms shows that, if used within an image signal processor, they enable digital camera devices to mimic the human visual system’s dynamic range and colour constancy capabilities; the ultimate goal of any state-of-the-art technique or commercial imaging device.