
    Artificial Color Constancy via GoogLeNet with Angular Loss Function

    Color Constancy is the ability of the human visual system to perceive colors as unchanged independently of the illumination. Giving a machine this ability would be beneficial in many fields where chromatic information is used; in particular, it significantly improves scene understanding and object recognition. In this paper, we propose a transfer learning-based algorithm with two main features: accuracy higher than many state-of-the-art algorithms and simplicity of implementation. Although GoogLeNet was used in the experiments, the approach may be applied to any CNN. Additionally, we discuss the design of a new loss function oriented specifically to this problem and propose a few of the most suitable options.
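The angular loss the abstract refers to is conventionally the angle between the estimated and ground-truth illuminant vectors in RGB space, which is also the standard evaluation metric in color constancy. A minimal NumPy sketch of that error (function name and `eps` guard are my own; the paper's exact loss variant may differ):

```python
import numpy as np

def angular_error(estimate, truth, eps=1e-9):
    """Angle in degrees between an estimated and a ground-truth
    illuminant vector (RGB). A differentiable version of the same
    expression can serve as a CNN training loss."""
    estimate = np.asarray(estimate, dtype=np.float64)
    truth = np.asarray(truth, dtype=np.float64)
    cos = np.dot(estimate, truth) / (
        np.linalg.norm(estimate) * np.linalg.norm(truth) + eps)
    # Clip to guard against floating-point values just outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Because the angle ignores vector magnitude, this loss is invariant to the overall brightness of the illuminant estimate, which is why it suits the color constancy problem better than a plain Euclidean loss.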

    Illuminant Estimation By Deep Learning

    Computational color constancy refers to the problem of estimating the color of the scene illumination in a color image, followed by color correction of the image through a white-balancing process so that its colors appear as if the image had been captured under a neutral white light source, producing a plausible, natural-looking image. The illuminant estimation part remains a challenging task due to the ill-posed nature of the problem, and many methods have been proposed in the literature, each following a certain approach in an attempt to improve the performance of the auto-white-balancing system at accurately estimating the illumination color for better image correction. These methods can typically be categorized into static-based and learning-based methods. Most proposed methods follow the learning-based approach because of its higher estimation accuracy compared to the former, which relies on simple assumptions. While many of those learning-based methods show satisfactory performance in general, they are built upon extracting handcrafted features, which requires deep knowledge of color image processing. More recent learning-based methods have shown greater improvements in illuminant estimation by using Deep Learning (DL) systems represented by Convolutional Neural Networks (CNNs), which automatically learn to extract useful features from the given image dataset. In this thesis, we present a highly effective Deep Learning approach that treats the illuminant estimation problem as an illuminant classification task by training a Convolutional Neural Network to classify input images into certain pre-defined illuminant classes. The output of the CNN, in the form of class probabilities, is then used to compute the illuminant color estimate.
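The final step described above — turning class probabilities into a single illuminant color — can be sketched as a probability-weighted average over the pre-defined class illuminants. The class colors, function name, and combination rule below are illustrative assumptions, not the thesis's exact procedure:

```python
import numpy as np

# Hypothetical set of pre-defined illuminant classes (linear RGB).
ILLUMINANT_CLASSES = np.array([
    [0.70, 0.60, 0.40],  # warm / tungsten-like
    [0.58, 0.58, 0.58],  # neutral
    [0.40, 0.55, 0.73],  # cool / shade-like
])

def estimate_illuminant(class_probs):
    """Combine the CNN's softmax class probabilities into one
    illuminant estimate via a probability-weighted average of the
    class illuminants, renormalized to a unit vector."""
    probs = np.asarray(class_probs, dtype=np.float64)
    estimate = probs @ ILLUMINANT_CLASSES
    return estimate / np.linalg.norm(estimate)
```

Averaging rather than taking the arg-max class lets the estimate interpolate between illuminants the network is uncertain about, which is one plausible reason to keep the full probability vector.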
Since training a deep CNN requires a large number of training examples to avoid overfitting, most recent CNN-based illuminant estimation methods attempt to overcome the limited number of images in the benchmark illuminant estimation datasets by sampling input images into multiple smaller patches as a form of data augmentation. This can adversely affect CNN training, however, because some of these patches contain no semantic information and can therefore act as noisy examples that lead to estimation ambiguity. In this thesis, we instead propose a novel approach to dataset augmentation that synthesizes images under different illuminations using the ground-truth illuminant colors of other training images, which enhances CNN training compared to similar previous methods. Experimental results on the standard illuminant estimation benchmark dataset show that the proposed solution outperforms most previous illuminant estimation methods and performs competitively with the state-of-the-art methods.
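The re-illumination augmentation described above is commonly implemented with the diagonal (von Kries) model: divide out the source image's ground-truth illuminant per channel, then multiply in another training image's illuminant. A minimal sketch, assuming linear RGB values in [0, 1]; the function name and clipping are my own, not necessarily the thesis's exact pipeline:

```python
import numpy as np

def relight(image, source_illum, target_illum):
    """Synthesize a new training image under a borrowed illuminant.

    Removes the source image's ground-truth illuminant via per-channel
    division (diagonal / von Kries model), then applies the
    ground-truth illuminant of another training image."""
    src = np.asarray(source_illum, dtype=np.float64)
    tgt = np.asarray(target_illum, dtype=np.float64)
    gain = tgt / src  # per-channel scaling factors
    return np.clip(image * gain, 0.0, 1.0)
```

Unlike patch sampling, every synthesized image keeps the full scene content, so the augmented examples stay semantically meaningful while still exposing the CNN to new illuminant colors.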