
    The reproduction angular error for evaluating the performance of illuminant estimation algorithms

    The angle between the RGBs of the measured and estimated illuminant colors - the recovery angular error - has been used to evaluate the performance of illuminant estimation algorithms. However, we noticed that this metric is not in line with how the illuminant estimates are used. Normally, the illuminant estimates are 'divided out' from the image to, hopefully, provide image colors that are not confounded by the color of the light. However, even when dividing out two different estimates yields the same reproduction of a scene, the corresponding recovery errors can span a large range. In this work the scale of the problem with the recovery error is quantified. Next we propose a new metric for evaluating illuminant estimation algorithms, called the reproduction angular error, which is defined as the angle between the RGB of a white surface when the actual and estimated illuminations are 'divided out'. Our new metric ties algorithm performance to how the illuminant estimates are used. For a given algorithm, adopting the new reproduction angular error leads to different optimal parameters. Further, the ranked list of best to worst algorithms changes when the reproduction angular error is used. The importance of using an appropriate performance metric is established.
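    As a concrete illustration of the two metrics described above, the sketch below (a minimal, assumed implementation, not the authors' code) computes the recovery angular error between two illuminant RGBs and the reproduction angular error of the white surface obtained when the estimate is divided out; the example illuminant values are hypothetical.

        import numpy as np

        def recovery_angular_error(gt_rgb, est_rgb):
            # Angle (degrees) between the ground-truth and estimated illuminant RGBs.
            gt, est = np.asarray(gt_rgb, float), np.asarray(est_rgb, float)
            cos = np.dot(gt, est) / (np.linalg.norm(gt) * np.linalg.norm(est))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        def reproduction_angular_error(gt_rgb, est_rgb):
            # Angle (degrees) between the white surface obtained by 'dividing out'
            # the estimated illuminant from the ground-truth white and the ideal
            # achromatic response (1, 1, 1).
            gt, est = np.asarray(gt_rgb, float), np.asarray(est_rgb, float)
            reproduced = gt / est
            cos = np.dot(reproduced, np.ones(3)) / (np.linalg.norm(reproduced) * np.sqrt(3.0))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        # Hypothetical illuminants: a warm ground-truth light and a slightly off estimate.
        print(recovery_angular_error([0.8, 0.6, 0.4], [0.7, 0.6, 0.45]))
        print(reproduction_angular_error([0.8, 0.6, 0.4], [0.7, 0.6, 0.45]))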

    A Curious Problem with Using the Colour Checker Dataset for Illuminant Estimation

    In illuminant estimation, we attempt to estimate the RGB of the light. We then use this estimate on an image to correct for the light's colour bias. Illuminant estimation is an essential component of all camera reproduction pipelines. How well an illuminant estimation algorithm works is determined by how well it predicts the ground truth illuminant colour. Typically, the ground truth is the RGB of a white surface placed in a scene. Over a large set of images an estimation error is calculated and different algorithms are then ranked according to their average estimation performance. Perhaps the most widely used publicly available dataset in illuminant estimation is Gehler's Colour Checker set as reprocessed by Shi and Funt. This image set comprises 568 images of typical everyday scenes. Curiously, we have found three different ground truths for the Shi-Funt Colour Checker image set. In this paper, we investigate whether adopting one ground truth over another results in different rankings of illuminant estimation algorithms. We find that, depending on the ground truth used, the ranking of different algorithms can change, and sometimes dramatically. Indeed, it is entirely possible that many of the recent 'advances' made in illuminant estimation were achieved because authors have switched to using a ground truth where better estimation performance is possible.
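    The ranking procedure described above can be made explicit with a short sketch. The code below is an assumed illustration (algorithm names, estimate arrays and ground-truth arrays are hypothetical): it scores each algorithm by its mean recovery angular error against a chosen ground truth and sorts the algorithms; running it against two different ground-truth sets shows whether the ranking changes.

        import numpy as np

        def mean_recovery_error(estimates, ground_truth):
            # Mean angular error (degrees) of one algorithm over a dataset.
            # estimates, ground_truth: arrays of shape (n_images, 3).
            e = estimates / np.linalg.norm(estimates, axis=1, keepdims=True)
            g = ground_truth / np.linalg.norm(ground_truth, axis=1, keepdims=True)
            cos = np.clip(np.sum(e * g, axis=1), -1.0, 1.0)
            return float(np.degrees(np.arccos(cos)).mean())

        def rank_algorithms(estimates_by_algorithm, ground_truth):
            # Return algorithm names ordered from best (lowest mean error) to worst.
            scores = {name: mean_recovery_error(est, ground_truth)
                      for name, est in estimates_by_algorithm.items()}
            return sorted(scores, key=scores.get)

        # Comparing rank_algorithms(algos, ground_truth_A) with
        # rank_algorithms(algos, ground_truth_B) reveals whether the adopted
        # ground truth changes which algorithm appears 'best'.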

    A Psychophysical Analysis of Illuminant Estimation Algorithms

    Illuminant estimation algorithms are often evaluated by calculating the recovery angular error, which is the angle between the RGBs of the ground truth and estimated illuminants. However, the same scene viewed under two different lights, for which the same algorithm delivers illuminant estimates that yield identical reproductions (so the practical estimation error is the same), can, counterintuitively, result in quite different recovery errors. Reproduction angular error has recently been introduced as an improvement to recovery angular error. The new metric calculates the angle between the RGB values of a white surface corrected by the ground truth illuminant and corrected by the estimated illuminant. Experiments show that illuminant estimation algorithms can be ranked differently depending on whether they are evaluated by recovery or reproduction angular error. In this paper a psychophysical experiment is designed which demonstrates that observers' choices of 'what makes a good reproduction' correlate with reproduction error and not recovery error.

    Re-evaluation of illuminant estimation algorithms in terms of reproduction results and failure cases

    Illuminant estimation algorithms are usually evaluated by measuring the recovery angular error, the angle between the RGB vectors of the estimated and ground-truth illuminants. However, this metric reports a wide range of errors for an algorithm-scene pair viewed under multiple lights. In this thesis, a new metric, "Reproduction Angular Error", is introduced which is an improvement over the old metric and enables us to evaluate the performance of the algorithms based on the white surface reproduced by the estimated illuminant rather than on the estimated illuminant itself. Adopting the new reproduction error is shown to affect both the overall ranking of algorithms and the choice of optimal parameters for particular approaches. A psychovisual image preference experiment is carried out to investigate whether human observers prefer colour-balanced images predicted by, respectively, the reproduction or the recovery error metric; observers are found to rank algorithms largely in accordance with the reproduction angular error rather than the recovery angular error. Whether recovery or reproduction error is used, the common approach to measuring algorithm performance is to calculate summary statistics over a dataset. Mean, median and percentile summary errors are often employed. However, these aggregate statistics, by definition, make it hard to predict performance for individual images or to discover whether there are certain "hard images" on which some illuminant estimation algorithms commonly fail. Not only do we find that such hard images exist, but, based only on the outputs of simple algorithms, we provide an algorithm for identifying these hard images (which can then be assessed using more computationally complex algorithms).
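    To make the summary-statistics discussion concrete, the sketch below is an assumed illustration rather than the thesis's own method; the 5-degree threshold and the rule that an image is "hard" only when every simple algorithm fails on it are arbitrary choices made for the example.

        import numpy as np

        def summarise_errors(per_image_errors):
            # Aggregate statistics commonly reported for an illuminant estimation algorithm.
            e = np.asarray(per_image_errors, float)
            return {"mean": float(e.mean()),
                    "median": float(np.median(e)),
                    "95th percentile": float(np.percentile(e, 95))}

        def flag_hard_images(errors_by_algorithm, threshold_deg=5.0):
            # Flag images on which every simple algorithm exceeds the error threshold.
            # errors_by_algorithm: dict of algorithm name -> per-image error array.
            errors = np.stack(list(errors_by_algorithm.values()))  # (n_algorithms, n_images)
            return np.flatnonzero((errors > threshold_deg).all(axis=0))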

    Color Constancy Adjustment using Sub-blocks of the Image

    Extreme presence of the source light in digital images decreases the performance of many image processing algorithms, such as video analytics, object tracking and image segmentation. This paper presents a color constancy adjustment technique which lessens the impact of large unvarying color areas of the image on the performance of existing statistics-based color correction algorithms. The proposed algorithm splits the input image into several non-overlapping blocks. It uses the Average Absolute Difference (AAD) of each block's color components as a measure to determine whether the block has adequate color information to contribute to the color adjustment of the whole image. It is shown through experiments that by excluding the unvarying color areas of the image, the performance of the existing statistics-based color constancy methods is significantly improved. Experimental results on four benchmark image datasets confirm that images corrected by the proposed framework, applied to the Gray World, Max-RGB and Shades of Gray statistics-based methods, achieve significantly better subjective color constancy and competitive objective color constancy compared with images produced by the existing and state-of-the-art methods.
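    The block-selection idea lends itself to a short sketch. The code below is an assumed illustration of the approach described above, combining per-block AAD screening with a Gray World estimate; the block size, the AAD threshold and the rule that a block is kept if any channel passes the threshold are guesses for the example rather than the paper's actual settings.

        import numpy as np

        def aad(channel):
            # Average Absolute Difference of one colour channel of a block.
            return float(np.abs(channel - channel.mean()).mean())

        def gray_world_on_informative_blocks(image, block_size=64, aad_threshold=2.0):
            # image: float array of shape (H, W, 3) with values in [0, 255].
            # Blocks whose AAD is below the threshold in every channel are treated
            # as large unvarying colour areas and excluded from the estimate.
            h, w, _ = image.shape
            kept = []
            for y in range(0, h - block_size + 1, block_size):
                for x in range(0, w - block_size + 1, block_size):
                    block = image[y:y + block_size, x:x + block_size, :]
                    if any(aad(block[..., c]) >= aad_threshold for c in range(3)):
                        kept.append(block.reshape(-1, 3))
            pixels = np.concatenate(kept) if kept else image.reshape(-1, 3)
            illuminant = pixels.mean(axis=0)                       # Gray World on kept blocks
            corrected = image / (illuminant / illuminant.mean())   # von Kries style correction
            return illuminant, np.clip(corrected, 0.0, 255.0)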

    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.