22 research outputs found

    Cubical Gamut Mapping Colour Constancy

    A new color constancy algorithm called Cubical Gamut Mapping (CGM) is introduced. CGM is computationally very simple, yet performs better than many currently known algorithms in terms of median illumination-estimation error. Moreover, it can be tuned to minimize the maximum error. Being able to reduce the maximum error, possibly at the expense of increased median error, is an advantage over many published color constancy algorithms, which may perform quite well in terms of median illumination-estimation error but have very poor worst-case performance. CGM is based on principles similar to existing gamut mapping algorithms; however, it represents the gamut of image chromaticities as a simple cube characterized by the image’s maximum and minimum rgb chromaticities rather than their more complicated convex hull. It also uses the maximal RGBs as an additional source of information about the illuminant. The estimate of the scene illuminant is obtained by linearly mapping the chromaticity of the maximum RGB, minimum rgb and maximum rgb values. The algorithm is trained off-line on a set of synthetically generated images. Linear programming techniques are used to optimize the mapping both in terms of the sum of errors and in terms of the maximum error. CGM uses a very simple image pre-processing stage that does not require image segmentation. For each pixel in the image, the pixels in the N-by-N surrounding block are averaged. Pixels for which at least one of the neighbouring pixels in the N-by-N surrounding block differs from the average by more than a given threshold are removed. This pre-processing not only improves CGM, but also improves the performance of other published algorithms such as Max-RGB and Grey World.
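    The pre-processing stage described above can be sketched directly from the abstract; a minimal, unoptimized NumPy illustration in which the function name, block size `n` and `thresh` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def cgm_preprocess(img, n=5, thresh=0.1):
    """Sketch of the CGM pre-processing stage: average each pixel's
    N-by-N surrounding block, and discard pixels for which any
    neighbour in that block differs from the block average by more
    than `thresh` in any channel.  Returns the averaged values of the
    surviving (interior) pixels as an (M, 3) array."""
    h, w, _ = img.shape
    r = n // 2
    keep = []
    for y in range(r, h - r):
        for x in range(r, w - r):
            block = img[y - r:y + r + 1, x - r:x + r + 1]  # N-by-N block
            avg = block.mean(axis=(0, 1))                  # per-channel mean
            if np.all(np.abs(block - avg) <= thresh):      # no deviating neighbour
                keep.append(avg)
    return np.array(keep)
```

A production version would vectorize the block computation, but the nested-loop form mirrors the description directly.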

    Reducing Worst-Case Illumination Estimates for Better Automatic White Balance

    Automatic white balancing works quite well on average, but seriously fails some of the time. These failures lead to completely unacceptable images. Can the number, or severity, of these failures be reduced, perhaps at the expense of slightly poorer white balancing on average, with the overall goal being to increase the overall acceptability of a collection of images? Since the main source of error in automatic white balancing arises from misidentifying the overall scene illuminant, a new illumination-estimation algorithm is presented that minimizes the high-percentile error of its estimates. The algorithm combines illumination estimates from standard existing algorithms and chromaticity gamut characteristics of the image as features in a feature space. Illuminant chromaticities are quantized into chromaticity bins. Given a test image of a real scene, its feature vector is computed, and for each chromaticity bin, the probability of the illuminant chromaticity falling into that bin given the feature vector is estimated. The probability estimation is based on Loftsgaarden-Quesenberry multivariate density function estimation over the feature vectors derived from a set of synthetic training images. Once the probability distribution estimate for a given chromaticity channel is known, the smallest interval that is likely to contain the right answer with a desired probability (i.e., the smallest chromaticity interval whose sum of probabilities is greater than or equal to the desired probability) is chosen. The point in the middle of that interval is then reported as the chromaticity of the illuminant. Testing on a dataset of real images shows that the error at the 90th and 98th percentiles can be reduced by roughly half, with minimal impact on the mean error.

    Color Constancy Adjustment using Sub-blocks of the Image

    An extreme presence of the source light in digital images decreases the performance of many image processing algorithms, such as video analytics, object tracking and image segmentation. This paper presents a color constancy adjustment technique which lessens the impact of large unvarying color areas of the image on the performance of existing statistics-based color correction algorithms. The proposed algorithm splits the input image into several non-overlapping blocks. It uses the Average Absolute Difference (AAD) value of each block’s color components as a measure to determine whether the block has adequate color information to contribute to the color adjustment of the whole image. It is shown through experiments that, by excluding the unvarying color areas of the image, the performance of existing statistics-based color constancy methods is significantly improved. Experimental results on four benchmark image datasets validate that the proposed framework, applied to the Gray World, Max-RGB and Shades of Gray statistics-based methods, produces images with significantly better subjective color constancy than, and competitive objective color constancy with, those of existing and state-of-the-art methods.
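    The block-selection idea can be sketched as follows; a minimal NumPy illustration where the block size, the AAD threshold, and the function name are illustrative assumptions rather than the paper's values:

```python
import numpy as np

def select_informative_blocks(img, block=32, aad_thresh=0.02):
    """Split the image into non-overlapping blocks and keep only those
    whose Average Absolute Difference (mean |value - block mean|,
    computed per colour channel then averaged) exceeds a threshold,
    i.e. blocks with enough colour variation to inform a statistics-
    based illuminant estimate.  Returns (row, col) block indices."""
    h, w, _ = img.shape
    kept = []
    for by in range(h // block):
        for bx in range(w // block):
            b = img[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            aad = np.abs(b - b.mean(axis=(0, 1))).mean()  # AAD of the block
            if aad > aad_thresh:
                kept.append((by, bx))
    return kept
```

The surviving blocks would then be concatenated and fed to Gray World, Max-RGB or Shades of Gray in place of the full image.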

    Colour constancy beyond the classical receptive field

    The problem of removing illuminant variations to preserve the colours of objects (colour constancy) has already been solved by the human brain using mechanisms that rely largely on centre-surround computations of local contrast. In this paper we adopt some of these biological solutions, described by long-known physiological findings, into a simple, fully automatic, functional model (termed Adaptive Surround Modulation or ASM). In ASM, the size of a visual neuron's receptive field (RF) as well as the relationship with its surround varies according to the local contrast within the stimulus, which in turn determines the nature of the centre-surround normalisation of cortical neurons higher up in the processing chain. We modelled colour constancy by means of two overlapping asymmetric Gaussian kernels whose sizes are adapted based on the contrast of the surround pixels, resembling the change of RF size. We simulated the contrast-dependent surround modulation by weighting the contribution of each Gaussian according to the centre-surround contrast. In the end, we obtained an estimation of the illuminant from the set of the most activated RFs' outputs. Our results on three single-illuminant and one multi-illuminant benchmark datasets show that ASM is highly competitive against the state-of-the-art and even outperforms learning-based algorithms in one case. Moreover, the robustness of our model is all the more tangible considering that our results were obtained using the same parameters for all datasets, that is, mimicking how the human visual system operates. These results suggest that dynamic adaptation mechanisms contribute to achieving higher accuracy in computational colour constancy.
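    A loose, simplified sketch of the centre-surround intuition only (not the ASM model itself, which uses asymmetric, adaptively sized kernels): compute centre and surround responses with two Gaussians of different widths, rank pixels by centre-surround contrast, and read the illuminant off the most activated responses. All parameter names and defaults here are illustrative assumptions.

```python
import numpy as np

def _blur(channel, sigma):
    """Separable Gaussian blur via 1-D 'same' convolutions (a simple
    sketch; kernel must be shorter than the image side)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, channel)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, tmp)

def asm_like_estimate(img, sigma_c=1.0, sigma_s=4.0, top=0.1):
    """Centre = small Gaussian, surround = large Gaussian; pixels are
    ranked by centre-surround contrast and the illuminant is the mean
    colour of the centre responses at the most activated locations,
    returned as a unit-norm RGB direction."""
    centre = np.stack([_blur(img[..., c], sigma_c) for c in range(3)], -1)
    surround = np.stack([_blur(img[..., c], sigma_s) for c in range(3)], -1)
    contrast = np.abs(centre - surround).sum(-1)    # centre-surround contrast
    k = max(1, int(top * contrast.size))
    idx = np.argsort(contrast.ravel())[-k:]         # most activated locations
    est = centre.reshape(-1, 3)[idx].mean(0)
    return est / np.linalg.norm(est)
```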

    Extending Minkowski norm illuminant estimation

    The ability to obtain colour images invariant to changes of illumination is called colour constancy. An algorithm for colour constancy takes sensor responses - digital images - as input, estimates the ambient light and returns a corrected image in which the illuminant influence over the colours has been removed. In this thesis we investigate the step of illuminant estimation for colour constancy and aim to extend the state of the art in this field. We first revisit the Minkowski Family Norm framework for illuminant estimation because, of all the simple statistical approaches, it is the most general formulation and, crucially, delivers the best results. This thesis makes four technical contributions. First, we reformulate the Minkowski approach to provide better estimation when a constraint on illumination is employed. Second, we show how the method can be implemented to run much faster (by orders of magnitude) than previous algorithms. Third, we show how a simple edge-based variant delivers improved estimation compared with the state of the art across many datasets. In contradistinction to the prior state of the art, our definition of edges is fixed (a simple combination of first and second derivatives), i.e. we do not tune our algorithm to particular image datasets. This performance is further improved by incorporating a gamut constraint on surface colour - our fourth contribution. The thesis finishes by considering our approach in the context of a recent OSA competition run to benchmark computational algorithms operating on physiologically relevant cone-based input data. Here we find that Constrained Minkowski Norms operating on spectrally sharpened cone sensors (linear combinations of the cones that behave more like camera sensors) support competition-leading illuminant estimation.
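    For reference, the basic Minkowski-norm (shades-of-grey) estimator that this framework generalizes can be written in a few lines; p=1 recovers Grey World and large p approaches Max-RGB:

```python
import numpy as np

def minkowski_illuminant(img, p=6):
    """Minkowski-family illuminant estimate: the p-th root of the mean
    p-th power of each channel, returned as a unit-norm RGB colour.
    p=6 is a commonly used setting in the literature."""
    e = (img.reshape(-1, 3) ** p).mean(axis=0) ** (1.0 / p)
    return e / np.linalg.norm(e)    # unit-length illuminant direction
```

The thesis's contributions (illumination constraints, fast implementation, edge-based variants, gamut constraints) build on top of this basic estimator; they are not shown here.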

    Semantic Color Constancy

    Color constancy aims to perceive the actual color of an object, disregarding the effect of the light source. Recent works showed that utilizing the semantic information in an image enhances the performance of computational color constancy methods. Considering the recent success of segmentation methods and the increased number of labeled images, we propose a color constancy method that combines individual illuminant estimations of detected objects, which are computed using the classes of the objects and their associated colors. We then introduce a weighting system that values the applicability of the object classes to the color constancy problem. Lastly, we introduce another metric expressing how well a detected object fits the learned model of its class. Finally, we evaluate our proposed method on a popular color constancy dataset, confirming that each weight addition enhances the performance of the global illuminant estimation. Experimental results are promising, outperforming the conventional methods while competing with the state-of-the-art methods. -- M.S. - Master of Science
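    The combination step described above (per-object estimates weighted by class applicability and by class-model fit) might look like the following; every name and the exact weighting form are illustrative assumptions, not the thesis's code:

```python
import numpy as np

def combine_object_estimates(estimates, class_weights, fit_scores):
    """Combine per-object illuminant estimates into one global
    estimate.  Each object's contribution is weighted by (i) how
    applicable its class is to the colour constancy problem and
    (ii) how well the object fits its class's learned colour model.
    Returns a unit-norm RGB illuminant direction."""
    w = np.asarray(class_weights) * np.asarray(fit_scores)
    est = (np.asarray(estimates) * w[:, None]).sum(axis=0) / w.sum()
    return est / np.linalg.norm(est)
```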

    Removing Outliers in Illumination Estimation

    A method of outlier detection is proposed as a way of improving illumination-estimation performance in general, and for scenes with multiple sources of illumination in particular. Based on random sample consensus (RANSAC), the proposed method (i) makes estimates of the illumination chromaticity from multiple, randomly sampled sub-images of the input image; (ii) fits a model to the estimates; (iii) makes further estimates, which are classified as useful or not on the basis of the initial model; and (iv) produces a final estimate based on the ones classified as being useful. Tests on the Gehler ColorChecker set of 568 images demonstrate that the proposed method works well, improves upon the performance of the base algorithm it uses for obtaining the sub-image estimates, and can roughly identify the image areas corresponding to different scene illuminants.
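    Steps (i)-(iv) can be sketched as follows; a RANSAC-flavoured toy version in which the model is simply the mean chromaticity of a random subset, and `n_model` and `tol` are illustrative parameters rather than the paper's:

```python
import numpy as np

def ransac_illuminant(sub_estimates, n_model=5, tol=0.05, rng=None):
    """Toy RANSAC-style consensus over sub-image illuminant estimates:
    fit a model (here, the mean) to a random subset, classify the
    remaining estimates as useful (within `tol` of the model) or not,
    and average the useful ones.  `sub_estimates` plays the role of
    the chromaticities estimated from random sub-images in step (i)."""
    rng = np.random.default_rng(rng)
    est = np.asarray(sub_estimates, dtype=float)
    pick = rng.choice(len(est), size=min(n_model, len(est)), replace=False)
    model = est[pick].mean(axis=0)                        # (ii) fit a model
    d = np.linalg.norm(est - model, axis=1)               # (iii) classify
    useful = est[d <= tol]
    return useful.mean(axis=0) if len(useful) else model  # (iv) final estimate
```

A full implementation would repeat the sample-and-fit loop and keep the model with the largest consensus set; a single pass is shown for brevity.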

    Illuminant Estimation By Deep Learning

    Computational color constancy refers to the problem of estimating the color of the scene illumination in a color image, followed by color correction of the image through a white balancing process, so that the colors of the image are viewed as if the image was captured under a neutral white light source, producing a plausible, natural-looking image. The illuminant estimation part is still a challenging task due to the ill-posed nature of the problem, and many methods have been proposed in the literature, each following a certain approach in an attempt to improve the accuracy with which the auto-white-balancing system estimates the illumination color for better image correction. These methods can typically be categorized into static-based and learning-based methods. Most of the proposed methods follow the learning-based approach because of its higher estimation accuracy compared to the former, which relies on simple assumptions. While many of those learning-based methods show satisfactory performance in general, they are built upon extracting handcrafted features, which requires deep knowledge of color image processing. More recent learning-based methods have shown further improvements in illuminant estimation by using Deep Learning (DL) systems, represented by Convolutional Neural Networks (CNNs), that automatically learn to extract useful features from the given image dataset. In this thesis, we present a highly effective Deep Learning approach which treats the illuminant estimation problem as an illuminant classification task by training a Convolutional Neural Network to classify input images as belonging to certain pre-defined illuminant classes. The output of the CNN, which is in the form of class probabilities, is then used to compute the illuminant color estimate.
    Since training a deep CNN requires a large number of training examples to avoid the overfitting problem, most recent CNN-based illuminant estimation methods attempt to overcome the limited number of images in the benchmark illuminant estimation datasets by sampling input images into multiple smaller patches as a form of data augmentation. This, however, can adversely affect CNN training, because some of these patches may not contain any semantic information and can therefore be considered noisy examples that lead to estimation ambiguity. In this thesis, we instead propose a novel approach to dataset augmentation that synthesizes images with different illuminations using the ground-truth illuminant colors of other training images, which enhances CNN training compared to similar previous methods. Experimental results on the standard illuminant estimation benchmark dataset show that the proposed solution outperforms most previous illuminant estimation methods and is competitive with the state-of-the-art methods.
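    Two of the pieces described above can be sketched compactly: re-rendering a training image under another image's ground-truth illuminant (the augmentation idea), and turning the CNN's class probabilities into a colour estimate. Both functions are hypothetical illustrations, assuming linear-RGB images and a von Kries style per-channel model, not the thesis's actual code:

```python
import numpy as np

def relight(img, illum_src, illum_dst):
    """Re-render a linear-RGB image captured under `illum_src` as if
    it were captured under `illum_dst`, via per-channel gains
    (von Kries model); values are clipped to [0, 1]."""
    gain = np.asarray(illum_dst) / np.asarray(illum_src)  # per-channel gain
    return np.clip(img * gain, 0.0, 1.0)

def estimate_from_probs(probs, class_illums):
    """Convert CNN class probabilities into an illuminant estimate by
    probability-weighting each class's representative illuminant
    colour; returns a unit-norm RGB direction."""
    e = (np.asarray(probs)[:, None] * np.asarray(class_illums, dtype=float)).sum(axis=0)
    return e / np.linalg.norm(e)
```

Under this model, a pixel showing a white surface (i.e. equal to the source illuminant colour) maps exactly to the destination illuminant colour.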

    A STUDY OF ILLUMINANT ESTIMATION AND GROUND TRUTH COLORS FOR COLOR CONSTANCY

    Ph.D. - Doctor of Philosophy