
    Colour Constancy For Nonā€Uniform Illuminant using Image Textures

    Colour constancy (CC) is the ability to perceive the true colour of a scene in its image regardless of changes in the scene's illuminant. Colour constancy is a significant part of the digital image processing pipeline, more precisely wherever the true colour of an object is needed. Most existing CC algorithms assume a uniform illuminant across the whole scene of the image, which is not always the case; hence, their performance is degraded by the presence of multiple light sources. This paper presents a colour constancy algorithm that uses image texture for both uniformly and non-uniformly lit scene images. The proposed algorithm applies the K-means algorithm to segment the input image based on its different colour features. Each segment's texture is then extracted using an entropy analysis algorithm. The colour information of the texture pixels is then used to calculate an initial colour constancy adjustment factor for each segment. Finally, the colour constancy adjustment factor for each pixel within the image is determined by fusing the colour constancy factors of all segments, weighted by the Euclidean distance of each pixel from the centre of each segment. Experimental results on both single- and multiple-illuminant image datasets show that the proposed algorithm outperforms existing state-of-the-art colour constancy algorithms, particularly when the images are lit by multiple light sources.
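    The per-pixel fusion step described in the abstract can be sketched as follows. This is a minimal numpy illustration under assumptions: the function name, the inverse-distance weighting scheme, and the small epsilon are illustrative choices, not the paper's actual implementation.

    ```python
    import numpy as np

    def fused_gains(image, centers, seg_gains):
        """Fuse per-segment colour-constancy gains into per-pixel gains,
        weighting each segment's RGB gain by the pixel's inverse Euclidean
        distance to that segment's spatial centre (illustrative scheme,
        not the paper's exact formulation)."""
        h, w, _ = image.shape
        ys, xs = np.mgrid[0:h, 0:w]
        coords = np.stack([ys, xs], axis=-1).astype(float)           # (h, w, 2)
        # distance from every pixel to every segment centre
        d = np.linalg.norm(coords[..., None, :] - centers, axis=-1)  # (h, w, k)
        wgt = 1.0 / (d + 1e-6)                  # closer segments weigh more
        wgt /= wgt.sum(axis=-1, keepdims=True)  # normalise weights per pixel
        # weighted sum of the k per-segment RGB gain vectors
        return wgt @ seg_gains                                       # (h, w, 3)
    ```

    A pixel that coincides with a segment centre receives (almost exactly) that segment's gain, while pixels between segments receive a smooth blend, which matches the abstract's description of distance-regulated fusion.
    
    
    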

    Artificial Color Constancy via GoogLeNet with Angular Loss Function

    Color constancy is the ability of the human visual system to perceive colors unchanged independently of the illumination. Giving a machine this capability will be beneficial in many fields where chromatic information is used; in particular, it significantly improves scene understanding and object recognition. In this paper, we propose a transfer learning-based algorithm with two main features: accuracy higher than many state-of-the-art algorithms and simplicity of implementation. Although GoogLeNet was used in the experiments, the given approach may be applied to any CNN. Additionally, we discuss the design of a new loss function oriented specifically to this problem and propose a few of the most suitable options.
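    The angular loss referred to above is built around the standard angular-error metric between the estimated and ground-truth illuminant vectors. A minimal numpy sketch of that metric follows; the paper's actual loss is a differentiable variant used inside CNN training, which this evaluation-style function does not reproduce.

    ```python
    import numpy as np

    def angular_error(pred, gt):
        """Angular error in degrees between a predicted and a ground-truth
        illuminant RGB vector -- the standard colour-constancy metric.
        Scale-invariant: only the direction of the vectors matters."""
        pred = np.asarray(pred, dtype=float)
        gt = np.asarray(gt, dtype=float)
        cos = np.dot(pred, gt) / (np.linalg.norm(pred) * np.linalg.norm(gt))
        # clip guards against tiny floating-point overshoot outside [-1, 1]
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    ```

    Because the metric depends only on vector direction, scaling an illuminant estimate does not change the error, which is why it is preferred over Euclidean losses for illuminant estimation.
    
    
    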

    MIMT: Multi-Illuminant Color Constancy via Multi-Task Learning

    The assumption of a uniform light color distribution, which holds true in single-light-color scenes, is no longer applicable in scenes that have multiple light colors. The spatial variability of multiple light colors makes the color constancy problem more challenging and requires the extraction of local surface/light information. Motivated by this, we introduce a multi-task learning method to estimate multiple light colors from a single input image. To obtain better cues about the local surface/light colors under multiple-light-color conditions, we design a multi-task learning framework with achromatic-pixel detection and surface-color similarity prediction as auxiliary tasks. These tasks facilitate the acquisition of local light color information and surface color correlations. Moreover, to ensure that our model maintains the constancy of surface colors regardless of variations in light colors, we also preserve local surface color features in our model. We demonstrate that our model achieves a 47.1% improvement compared to a state-of-the-art multi-illuminant color constancy method on a multi-illuminant dataset (LSMI). While single light colors are not our main focus, our model also maintains robust performance on a single-illuminant dataset (NUS-8) and provides an 18.5% improvement over the state-of-the-art single-illuminant color constancy method.
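    A multi-task objective in the spirit of the abstract combines a main illuminant-estimation loss with the two auxiliary terms it names. The sketch below is an assumption-laden illustration: the loss choices (MSE for illuminant regression and similarity, binary cross-entropy for achromatic-pixel detection) and the auxiliary weight are not taken from the paper.

    ```python
    import numpy as np

    def mimt_style_loss(pred_illum, gt_illum, pred_achro, gt_achro,
                        pred_sim, gt_sim, aux_w=0.5):
        """Illustrative multi-task objective: main per-pixel illuminant
        regression (MSE) plus auxiliary achromatic-pixel detection (binary
        cross-entropy) and surface-colour similarity (MSE) terms, with a
        hypothetical auxiliary weight aux_w."""
        illum = np.mean((pred_illum - gt_illum) ** 2)        # main task
        eps = 1e-7
        p = np.clip(pred_achro, eps, 1 - eps)                # avoid log(0)
        achro = -np.mean(gt_achro * np.log(p)
                         + (1 - gt_achro) * np.log(1 - p))   # auxiliary 1
        sim = np.mean((pred_sim - gt_sim) ** 2)              # auxiliary 2
        return illum + aux_w * (achro + sim)
    ```

    In practice such a loss would be written in an autodiff framework so the auxiliary heads can shape the shared features during training; the numpy version only shows how the terms combine.
    
    
    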