A Hybrid Strategy for Illuminant Estimation Targeting Hard Images
Illumination estimation is a well-studied topic in computer vision. Early work reported performance on benchmark datasets using simple statistical aggregates such as mean or median error. Recently, it has become accepted to report a wider range of statistics, e.g. top 25%, mean, and bottom 25% performance. While these additional statistics are more informative, their relationship across different methods is unclear. In this paper, we analyse the results of a number of methods to see if there exist ‘hard’ images that are challenging for multiple methods. Our findings indicate that there are certain images that are difficult for fast statistical-based methods but that can be handled by more complex learning-based approaches, at a significant cost in time complexity. This has led us to design a hybrid method that first classifies an image as ‘hard’ or ‘easy’ and then uses the slower method only when needed, thus providing a balance between time complexity and performance. In addition, we have identified dataset images that almost no method is able to process. We argue, however, that these images have problems with how their ground truth is established and recommend their removal from future performance evaluations.
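The hybrid dispatch described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the `is_hard` classifier and the `slow_method` estimator are placeholders, and gray-world stands in for the fast statistical method.

```python
import numpy as np

def gray_world(img):
    """Fast statistical illuminant estimate: per-channel mean, unit-normalised."""
    est = img.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

def hybrid_estimate(img, is_hard, slow_method):
    """Run the slow learning-based estimator only for images classified
    as 'hard'; otherwise fall back to the fast statistical method."""
    return slow_method(img) if is_hard(img) else gray_world(img)
```

In practice `is_hard` would be a trained classifier and `slow_method` a learning-based estimator; the point is that the expensive path is taken only when the cheap one is predicted to fail.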
Physical-based optimization for non-physical image dehazing methods
Images captured under hazy conditions (e.g. fog, air pollution) usually present faded colors and a loss of contrast. To improve their visibility, a process called image dehazing can be applied. Some of the most successful image dehazing algorithms are based on image processing methods but do not follow any physical image formation model, which limits their performance. In this paper, we propose a post-processing technique that alleviates this limitation by enforcing the original method's output to be consistent with a popular physical model of image formation under haze. Our results improve upon those of the original methods both qualitatively and according to several metrics, and they have also been validated via psychophysical experiments. These results are particularly striking in terms of avoiding over-saturation and reducing color artifacts, which are the most common shortcomings of image dehazing methods.
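The "popular physical model" referred to here is commonly the Koschmieder model, I = J·t + A·(1 − t), with scene radiance J, transmission t and airlight A. A minimal sketch of the model and its inversion, assuming t and A are already known (estimating them is the hard part the paper does not reduce to this):

```python
import numpy as np

def apply_haze(J, t, A):
    """Koschmieder image formation: I = J*t + A*(1 - t),
    with scene radiance J, transmission map t and airlight A."""
    t = t[..., None]            # broadcast transmission over colour channels
    return J * t + A * (1.0 - t)

def dehaze(I, t, A, t_min=0.1):
    """Invert the model: J = (I - A) / t + A, with t clipped below
    by t_min to avoid amplifying noise where the haze is dense."""
    t = np.clip(t, t_min, 1.0)[..., None]
    return (I - A) / t + A
```

Enforcing consistency with this model, as the abstract proposes, amounts to requiring that the dehazed output and the hazy input be related by the forward equation for some plausible t and A.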
Color homography
We show the surprising result that colors across a change in viewing condition (changing light color, shading and camera) are related by a homography. Our homography color correction application delivers improved color fidelity compared with linear least squares. Comment: Accepted by Progress in Colour Studies 201
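The core operation is small enough to sketch: RGBs are mapped through a 3×3 homography in homogeneous coordinates, and the per-pixel scale (e.g. shading) is divided out. This is an illustrative sketch of the mapping only, not the paper's fitting procedure.

```python
import numpy as np

def apply_color_homography(rgb, H):
    """Map RGB vectors through a 3x3 colour homography H, then divide
    out the per-pixel scale (homogeneous normalisation), so shading
    changes of the input leave the result unchanged."""
    out = rgb @ H.T
    return out / out.sum(axis=-1, keepdims=True)
```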
The alternating least squares technique for nonuniform intensity color correction
Color correction involves mapping device RGBs to display counterparts or to corresponding XYZs. A popular methodology is to take an image of a color chart and then solve for the best 3 × 3 matrix that maps the RGBs to the corresponding known XYZs. However, this approach fails at times when the intensity of the light varies across the chart. This variation needs to be removed before estimating the correction matrix. This is typically achieved by acquiring an image of a uniform gray chart in the same location, and then dividing the color checker image by the gray-chart image. Of course, taking images of two charts doubles the complexity of color correction. In this article, we present an alternative color correction algorithm that simultaneously estimates the intensity variation and the 3 × 3 transformation matrix from a single image of a color chart. We show that the color correction problem, that is, finding the 3 × 3 correction matrix, can be solved using a simple alternating least-squares procedure. Experiments validate our approach. © 2014 Wiley Periodicals, Inc. Col Res Appl, 40, 232–242, 201
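The alternating scheme described above can be sketched directly: with the shading factors fixed, the 3×3 matrix is an ordinary least-squares solve; with the matrix fixed, each per-patch shading factor has a closed-form solution. A minimal sketch under the model k_i · (rgb_i @ M) ≈ xyz_i (function and variable names are my own, not the paper's):

```python
import numpy as np

def als_color_correction(rgb, xyz, n_iter=200):
    """Alternating least squares: jointly estimate per-patch shading
    factors k_i and a 3x3 matrix M so that k_i * (rgb_i @ M) ~ xyz_i."""
    k = np.ones(rgb.shape[0])
    for _ in range(n_iter):
        # Fix k: M is an ordinary least-squares solve
        M, *_ = np.linalg.lstsq(k[:, None] * rgb, xyz, rcond=None)
        # Fix M: each shading factor has a closed-form solution
        pred = rgb @ M
        k = (pred * xyz).sum(axis=1) / np.maximum((pred * pred).sum(axis=1), 1e-12)
    return k, M
```

Note there is an inherent scale ambiguity between k and M (scaling one up and the other down leaves the fit unchanged), so only the product is identifiable.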
Interactive Illumination Invariance
Illumination effects cause problems for many computer vision algorithms. We present a user-friendly interactive system for robust illumination-invariant image generation. Compared with previous automated approaches to deriving illumination-invariant images, our system enables users to specify a particular kind of illumination variation for removal. The derivation of the illumination-invariant image is guided by the user input: a stroke that defines an area covering a set of pixels whose intensities are influenced predominantly by the illumination variation. This additional flexibility improves robustness when processing non-linearly rendered images and images of scenes whose illumination variations are difficult to estimate automatically. Finally, we present some evaluation results of our method.
Root-Polynomial Color Homography Color Correction
Homographies are at the heart of computer vision: they are used in geometric camera calibration, image registration, and stereo vision, among other tasks. In geometric computer vision, two images of the same 3D plane captured from two different viewing locations are related by a planar (2D) homography. Recent work showed that the concept of a planar homography mapping can be applied to shading-invariant color correction. In this paper, we extend the color homography color correction idea by incorporating higher-order root-polynomial terms into the color correction problem formulation. Our experiments show that our new shading-invariant color correction method obtains more accurate and stable performance than the previous 2D color homography method.
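The key property of root-polynomial terms, and the reason they suit shading-invariant correction, is that each term has degree-1 homogeneity: scaling the RGB by an exposure factor scales every feature by the same factor. A sketch of the degree-2 expansion (the specific term set is illustrative):

```python
import numpy as np

def root_poly_2(rgb):
    """Degree-2 root-polynomial expansion of RGB. Each term has degree-1
    homogeneity (e.g. sqrt((k*r)*(k*g)) = k*sqrt(r*g)), so the features
    scale with exposure exactly as raw RGB does."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([r, g, b,
                     np.sqrt(r * g), np.sqrt(g * b), np.sqrt(r * b)],
                    axis=-1)
```

An ordinary polynomial expansion (with terms like r·g) breaks this property, which is why plain polynomial correction is sensitive to shading.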
A Psychophysical Analysis of Illuminant Estimation Algorithms
Illuminant estimation algorithms are often evaluated by calculating the recovery angular error, which is the angle between the RGBs of the ground-truth and estimated illuminants. However, counterintuitively, the same scene viewed under two different lights, for which the same algorithm delivers illuminant estimates that produce identical reproductions (so the practical estimation error is the same), can result in quite different recovery errors. Reproduction angular error has recently been introduced as an improvement on recovery angular error. The new metric calculates the angle between the RGB values of a white surface corrected by the ground-truth illuminant and corrected by the estimated illuminant. Experiments show that illuminant estimation algorithms can be ranked differently depending on whether they are evaluated by recovery or reproduction angular error. In this paper a psychophysical experiment is designed which demonstrates that observers' choices of 'what makes a good reproduction' correlate with reproduction error and not with recovery error.
Maximum Ignorance Polynomial Colour Correction
In colour correction, we map the RGBs captured by a camera to human visual system referenced colour coordinates such as sRGB and CIE XYZ. Two of the simplest methods reported are linear and polynomial regression. However, obtaining optimal performance using regression – especially for a polynomial-based method – requires a large corpus of training data, which is time consuming to obtain. If one has access to the device spectral sensitivities, then an alternative approach is to generate RGBs synthetically (we numerically generate camera RGBs from measured surface reflectances and light spectra). Advantageously, there is then no limit to the number of training samples we might use. In the limit – under so-called maximum ignorance with positivity colour correction – all possible colour signals are assumed. In this work, we revisit the maximum ignorance idea in the context of polynomial regression. The formulation of the problem is much trickier, but we show – albeit with some tedious derivation – how to solve for the polynomial regression matrix in closed form. Empirically, however, this new polynomial maximum ignorance regression delivers significantly poorer colour correction performance compared with a physical-target-based method. This negative result teaches that the maximum ignorance technique is not directly applicable to non-linear methods. However, the derivation of this result leads to some interesting mathematical insights which point to how a maximum-ignorance-type approach can be followed.
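For the linear case, the maximum-ignorance-with-positivity idea can be sketched by Monte Carlo: draw random nonnegative colour signals, integrate them against the camera and XYZ sensitivities, and regress. This is a toy sketch of the training setup only, not the paper's closed-form polynomial derivation, and all names are my own.

```python
import numpy as np

def max_ignorance_linear_fit(cam_sens, cmf, n_samples=10000, seed=0):
    """Linear colour correction trained on synthetic data: draw random
    nonnegative colour signals ('maximum ignorance with positivity'),
    form camera RGBs and XYZs, and solve the 3x3 regression."""
    rng = np.random.default_rng(seed)
    # Random nonnegative spectra sampled at the sensor wavelengths
    signals = rng.random((n_samples, cam_sens.shape[0]))
    rgb = signals @ cam_sens    # synthetic camera responses
    xyz = signals @ cmf         # synthetic CIE XYZ responses
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M
```

The paper's contribution is doing the polynomial analogue of this in closed form (integrating over all positive signals analytically), and its negative empirical result concerns that polynomial extension, not the linear case sketched here.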
Unifying optimization methods for color filter design
Through optimization we can solve for a filter such that, when the camera views the world through it, the camera is more colorimetric. Previous work solved for the filter that best satisfied the Luther condition: the camera spectral sensitivities after filtering should be approximately a linear transform of the CIE XYZ color matching functions. A more recent method optimized for the filter that maximizes the Vora-Value (a measure of the closeness of the vector spaces spanned by the camera sensors and the human vision sensors). The optimized Luther- and Vora-filters are different from one another. In this paper we begin by observing that the function defining the Vora-Value is equivalent to the Luther-condition optimization if we use an orthonormal basis of the XYZ color matching functions, i.e. we linearly transform the XYZ sensitivities to an orthonormal basis. In this formulation, the Luther-optimization algorithm is shown to almost optimize the Vora-Value. Moreover, experiments demonstrate that the modified orthonormal Luther-method finds the same color filter as the Vora-Value filter optimization. Significantly, our modified algorithm is simpler in formulation and converges faster than the direct Vora-Value method.
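The Vora-Value itself is compact to state: with P_A the orthogonal projector onto the column space of a sensor set A, it is trace(P_cam · P_xyz)/3, reaching 1 exactly when the filtered camera spans the same subspace as the XYZ matching functions (the Luther condition). A sketch of the measure (not of the filter optimization itself):

```python
import numpy as np

def projector(S):
    """Orthogonal projector onto the column space of sensor matrix S
    (columns are spectral sensitivities sampled over wavelength)."""
    Q, _ = np.linalg.qr(S)
    return Q @ Q.T

def vora_value(cam, xyz):
    """Vora-Value: trace(P_cam @ P_xyz) / 3. Equals 1 exactly when the
    camera sensitivities span the same subspace as the XYZ colour
    matching functions, i.e. when the Luther condition holds."""
    return np.trace(projector(cam) @ projector(xyz)) / 3.0
```

Because the measure depends only on the spanned subspaces, replacing the XYZ sensitivities by an orthonormal basis of the same space leaves it unchanged, which is the observation the paper builds on.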