540 research outputs found

    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
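    As a rough illustration of the vector-space view of color measurement mentioned in this abstract, the sketch below models a sensor response as the inner product of the color signal (illuminant times reflectance) with the sensor's spectral sensitivities. All spectra and sensitivities here are hypothetical stand-ins, not data from the survey.

```python
import numpy as np

# Wavelengths sampled at 10 nm steps from 400-700 nm (31 samples).
wavelengths = np.arange(400, 710, 10)
n = wavelengths.size

rng = np.random.default_rng(0)
reflectance = rng.random(n)            # hypothetical surface reflectance r(lambda)
illuminant = np.ones(n)                # hypothetical flat illuminant e(lambda)
sensitivities = rng.random((n, 3))     # hypothetical RGB sensor sensitivities S

# Color signal reaching the sensor, and the RGB response as S^T (e * r).
color_signal = illuminant * reflectance
rgb = sensitivities.T @ color_signal
print(rgb)
```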

    Computing von Kries Illuminant Changes by Piecewise Inversion of Cumulative Color Histograms

    We present a linear algorithm for the computation of the illuminant change occurring between two color pictures of a scene. We model the light variations with the von Kries diagonal transform and we estimate it by minimizing a dissimilarity measure between the piecewise inversions of the cumulative color histograms of the considered images. We also propose a method for illuminant-invariant image recognition based on our von Kries transform estimate.
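    A minimal numpy sketch of the idea, under simplifying assumptions of our own: the per-channel von Kries gains mapping one picture to the other are estimated by comparing the inverse cumulative histograms (quantiles) of each channel with a scalar least-squares fit, a simplified stand-in for the piecewise inversion and dissimilarity minimization described in the abstract.

```python
import numpy as np

def von_kries_correct(image, gains):
    """Apply a von Kries diagonal transform: scale each RGB channel independently."""
    return image * gains.reshape(1, 1, 3)

def estimate_von_kries_gains(img_a, img_b, quantiles=np.linspace(0.05, 0.95, 19)):
    """Estimate per-channel gains mapping img_a to img_b from the inverse
    cumulative histograms (quantiles) of each channel."""
    gains = np.empty(3)
    for c in range(3):
        qa = np.quantile(img_a[..., c].ravel(), quantiles)
        qb = np.quantile(img_b[..., c].ravel(), quantiles)
        # Least-squares scalar fit: qb ~ g * qa
        gains[c] = np.dot(qa, qb) / np.dot(qa, qa)
    return gains

# Usage: two renderings of the same scene under different lights.
rng = np.random.default_rng(0)
scene = rng.random((64, 64, 3))
true_gains = np.array([1.3, 1.0, 0.7])
scene_b = von_kries_correct(scene, true_gains)
print(estimate_von_kries_gains(scene, scene_b))  # close to [1.3, 1.0, 0.7]
```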

    Ridge Regression Approach to Color Constancy

    This thesis presents work on color constancy and its application in the field of computer vision. Color constancy is the ability to represent (visualize) the reflectance properties of a scene independently of the illumination spectrum. The motivation behind this work is twofold. The primary motivation is to seek consistency and stability in color reproduction and algorithm performance, respectively, because color is used as an important feature in many computer vision applications; consistency of color features is therefore essential for high application success. The second motivation is to reduce computational complexity without sacrificing the first. This work presents a machine learning approach to color constancy in which an empirical model is developed from training data. Neural networks and support vector machines are two prominent nonlinear learning methods. The work on support vector machine based color constancy shows its superior performance over neural network based color constancy in terms of stability, but the support vector machine is a time-consuming method. An alternative to the support vector machine is a simple, fast and analytically solvable linear modeling technique known as ridge regression, which learns the dependency between surface reflectance and illumination from a presented training sample of data. Ridge regression answers both parts of the motivation behind this work: it is stable and computationally simple. The proposed algorithms, support vector machine and ridge regression, involve a three-step process. First, an input matrix constructed from the preprocessed training data set is trained to obtain a trained model. Second, test images are presented to the trained model to obtain chromaticity estimates of the illuminants present in the test images. Finally, a linear diagonal transformation is performed to obtain the color-corrected image. The results show the effectiveness of the proposed algorithms on both calibrated and uncalibrated data sets in comparison with the methods discussed in the literature review. The thesis concludes with a complete discussion and summary comparing the proposed approaches with other algorithms.
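    An illustrative sketch of the three-step process, with assumptions not given in the abstract: the image features, the regularization parameter, and the choice to regress the illuminant RGB directly (rather than its chromaticity) are placeholders, while the ridge solution uses the standard closed form.

```python
import numpy as np

def ridge_fit(X, Y, lam=0.1):
    """Closed-form ridge regression: W = (X^T X + lam*I)^(-1) X^T Y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def diagonal_correct(image, est_illuminant):
    """Divide out the estimated illuminant (linear diagonal transform)."""
    return image / est_illuminant.reshape(1, 1, 3)

# Step one: build an input matrix from (hypothetical) preprocessed training data
# and train the model. X holds per-image features, Y the training illuminant RGBs.
rng = np.random.default_rng(1)
X_train = rng.random((200, 32))
Y_train = 0.5 + 0.5 * rng.random((200, 3))
W = ridge_fit(X_train, Y_train, lam=0.1)

# Step two: present a test image's features to the trained model.
x_test = rng.random((1, 32))
est_illuminant = (x_test @ W)[0]

# Step three: color-correct the test image with the estimate.
test_image = rng.random((16, 16, 3))
corrected = diagonal_correct(test_image, est_illuminant)
print(est_illuminant, corrected.shape)
```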

    Bootstrapping Color Constancy

    Bootstrapping provides a novel approach to training a neural network to estimate the chromaticity of the illuminant in a scene given image data alone. For initial training, the network requires feedback about the accuracy of its current results. In the case of a network for color constancy, this feedback is the chromaticity of the incident scene illumination. In the past [1], perfect feedback has been used, but in the bootstrapping method feedback with a considerable degree of random error can be used to train the network instead. In particular, the grayworld algorithm [2], which provides only modest color constancy performance, is used to train a neural network which in the end performs better than the grayworld algorithm used to train it.
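    A compact sketch of the bootstrapping idea, with a linear least-squares regressor standing in for the neural network and chromaticity histograms standing in for the image features (both are assumptions, not the paper's setup): the training labels come from the imperfect grayworld estimator rather than from measured scene illuminants.

```python
import numpy as np

def grayworld_estimate(image):
    """Grayworld illuminant estimate: mean RGB, normalized to chromaticity."""
    m = image.reshape(-1, 3).mean(axis=0)
    return m / m.sum()

def chroma_histogram(image, bins=8):
    """Simple image feature: a flattened 2D (r, g) chromaticity histogram."""
    rgb = image.reshape(-1, 3)
    s = rgb.sum(axis=1, keepdims=True)
    rg = rgb[:, :2] / np.maximum(s, 1e-6)
    h, _, _ = np.histogram2d(rg[:, 0], rg[:, 1], bins=bins, range=[[0, 1], [0, 1]])
    return (h / h.sum()).ravel()

# Bootstrapped training set: features from images, labels from the (imperfect)
# grayworld algorithm instead of the measured scene illuminants.
rng = np.random.default_rng(2)
images = [rng.random((32, 32, 3)) * rng.uniform(0.5, 1.5, size=3) for _ in range(200)]
X = np.stack([chroma_histogram(im) for im in images])
Y = np.stack([grayworld_estimate(im) for im in images])

# A linear least-squares regressor stands in for the neural network here.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
print((X @ W)[:3])  # chromaticity estimates for the first three training images
```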

    The reproduction angular error for evaluating the performance of illuminant estimation algorithms

    The angle between the RGBs of the measured and estimated illuminant colors, the recovery angular error, has been used to evaluate the performance of illuminant estimation algorithms. However, we noticed that this metric is not in line with how illuminant estimates are used. Normally, the illuminant estimates are 'divided out' of the image to, hopefully, provide image colors that are not confounded by the color of the light. However, estimates that lead to the same reproduction of the same scene can have a large range of recovery errors. In this work the scale of this problem with the recovery error is quantified. Next, we propose a new metric for evaluating illuminant estimation algorithms, called the reproduction angular error, defined as the angle between the RGBs of a white surface when the actual and the estimated illuminations are 'divided out'. Our new metric ties algorithm performance to how the illuminant estimates are used. For a given algorithm, adopting the new reproduction angular error leads to different optimal parameters, and the ranked list of best to worst algorithms changes when the reproduction angular error is used. The importance of using an appropriate performance metric is established.
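    The two metrics can be sketched directly from their definitions in the abstract; the example illuminant RGBs below are arbitrary.

```python
import numpy as np

def angular_error(a, b):
    """Angle in degrees between two RGB vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def recovery_error(est, actual):
    """Recovery angular error: angle between estimated and measured illuminant RGBs."""
    return angular_error(est, actual)

def reproduction_error(est, actual, white=np.ones(3)):
    """Reproduction angular error: a white surface under the actual light has RGB
    proportional to the illuminant; dividing out the estimate gives actual/est,
    which is compared against the ideal white reproduction."""
    return angular_error(actual / est, white)

actual = np.array([0.9, 1.0, 0.6])
est = np.array([1.0, 1.0, 0.8])
print(recovery_error(est, actual), reproduction_error(est, actual))
```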

    Object Recognition and Pose Estimation across Illumination Changes

    In this paper, we present a new algorithm for color-based object recognition that detects objects and estimates their pose (position and orientation) in cluttered scenes observed under uncontrolled illumination conditions. As in many other color-based object-recognition algorithms, color histograms are fundamental to our approach; however, we use histograms obtained from overlapping subwindows rather than from the entire image. Furthermore, each local histogram is normalized using greyworld normalization in order to be as insensitive to illumination as possible. An object from a database of prototype objects is identified and located in an input image by matching the subwindow contents. The prototype is detected in the input whenever many good histogram matches are found between the subwindows of the input image and those of the prototype. In essence, normalized color histograms of subwindows are the local features being matched. Once an object has been recognized, its 2D pose is found by approximating the geometrical transformation that most consistently maps the locations of the prototype's subwindows to their matched subwindow locations in the input image.
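    A rough sketch of the local-feature step under assumptions of our own (the window size, bin count, and histogram intersection as the match score are placeholders, not the paper's parameters): each overlapping subwindow is greyworld-normalized, summarized by a color histogram, and matches between prototype and input subwindows are counted.

```python
import numpy as np

def greyworld_normalize(window):
    """Divide each channel by its mean so the window's average color is grey."""
    means = window.reshape(-1, 3).mean(axis=0)
    return window / np.maximum(means, 1e-6)

def window_histogram(window, bins=4):
    """Normalized 3D color histogram of a greyworld-normalized subwindow."""
    norm = greyworld_normalize(window).reshape(-1, 3)
    hist, _ = np.histogramdd(norm, bins=bins, range=[(0, 3)] * 3)
    return (hist / hist.sum()).ravel()

def subwindow_histograms(image, size=16, step=8):
    """Histograms of overlapping subwindows, keyed by top-left position."""
    feats = {}
    h, w, _ = image.shape
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            feats[(y, x)] = window_histogram(image[y:y + size, x:x + size])
    return feats

def histogram_intersection(h1, h2):
    return np.minimum(h1, h2).sum()

# Count good matches between a prototype and an input image (illustrative only).
rng = np.random.default_rng(3)
proto, scene = rng.random((64, 64, 3)), rng.random((64, 64, 3))
pf, sf = subwindow_histograms(proto), subwindow_histograms(scene)
matches = sum(1 for hp in pf.values() for hs in sf.values()
              if histogram_intersection(hp, hs) > 0.8)
print(matches)
```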