Illuminant Estimation by Voting
Obtaining an estimate of the illuminant color is an important component of many image analysis applications. Due to the complexity of the problem, many restrictive assumptions are commonly made, so existing illuminant estimation methodologies are not widely applicable to natural images. We propose a methodology that analyzes a large number of regions in an image: an illuminant estimate is obtained independently from each region, and a global illumination color is computed by consensus. Each region is composed mainly of pixels that simultaneously exhibit both diffuse and specular reflection. This allows more pixels to be included than in purely specularity-based methods, while at the same time avoiding some of the restrictive assumptions of purely diffuse-based approaches. As such, our technique is particularly well suited to analyzing real-world images. Experiments with laboratory data show that our methodology outperforms 75% of other illuminant estimation methods. On natural images, the algorithm is very stable and provides qualitatively correct estimates.
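The per-region voting step described in the abstract can be sketched as follows. This is a minimal illustration that assumes a per-channel median as the consensus rule; the paper's actual consensus mechanism may differ.

```python
import numpy as np

def estimate_illuminant_by_voting(region_estimates):
    """Combine per-region illuminant estimates by consensus.

    region_estimates: (N, 3) array of RGB illuminant estimates, one per
    image region. The consensus here is a per-channel median, a simple
    robust vote (an assumption, not the paper's exact rule).
    """
    estimates = np.asarray(region_estimates, dtype=float)
    # Normalise each estimate so only chromaticity votes, not brightness.
    estimates /= estimates.sum(axis=1, keepdims=True)
    global_estimate = np.median(estimates, axis=0)
    return global_estimate / global_estimate.sum()

# Three regions roughly agree on a reddish illuminant; one outlier
# region votes blue, and the median suppresses it.
votes = [[0.50, 0.30, 0.20], [0.48, 0.32, 0.20],
         [0.52, 0.28, 0.20], [0.20, 0.30, 0.50]]
print(estimate_illuminant_by_voting(votes))  # roughly [0.49, 0.30, 0.20]
```

A median vote is one of the simplest robust consensus schemes: a minority of regions that violate the reflection assumptions cannot drag the global estimate far from the majority.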
ORGB: Offset Correction in RGB Color Space for Illumination-Robust Image Processing
Single materials have colors which form straight lines in RGB space. However,
in severe shadow cases those lines do not intersect the origin, which is
inconsistent with the description in most of the literature. This paper is
concerned with detecting and correcting the offset between the intersection
point and the origin. First, we analyze the cause of the offset via an optical
imaging model. Second, we present a simple and effective way to detect and
remove the offset. The resulting images, named ORGB, have almost the same
appearance as the original RGB images while being more illumination-robust for
color space conversion. Moreover, image processing using ORGB instead of RGB is
free from the interference of shadows. Finally, the proposed offset correction
method is applied to a road detection task, improving performance in both
quantitative and qualitative evaluations.
Comment: Project website: https://baidut.github.io/ORGB
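The offset-removal idea can be sketched as follows. This is an illustrative reconstruction, assuming the offset is estimated as the least-squares common intersection of per-material color lines fitted by PCA; the paper's actual detection procedure may differ.

```python
import numpy as np

def estimate_offset(material_pixel_sets):
    """Estimate the common RGB point where single-material colour lines
    intersect (ideally the origin; under severe shadow, an offset).

    material_pixel_sets: list of (N_i, 3) arrays, each holding the RGB
    values of pixels from one material. A sketch of the idea, not the
    paper's exact detection method.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for pixels in material_pixel_sets:
        pixels = np.asarray(pixels, dtype=float)
        mean = pixels.mean(axis=0)
        # Principal direction of this material's colour line (PCA).
        _, _, vt = np.linalg.svd(pixels - mean, full_matrices=False)
        d = vt[0]
        # Accumulate the least-squares condition for the point closest
        # to every line: (I - d d^T)(x - mean) = 0.
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ mean
    return np.linalg.solve(A, b)

def to_orgb(image, offset):
    """Subtract the offset so colour lines pass through the origin."""
    return np.clip(np.asarray(image, dtype=float) - offset, 0.0, None)
```

With the offset removed, ratios between channels become stable under illumination changes, which is what makes subsequent color space conversions more robust.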
Ridge Regression Approach to Color Constancy
This thesis presents work on color constancy and its application in the field of computer vision. Color constancy is the phenomenon of representing (visualizing) the reflectance properties of a scene independently of the illumination spectrum. The motivation behind this work is twofold. The primary motivation is to seek consistency and stability in color reproduction and algorithm performance, respectively: color is an important feature in many computer vision applications, so consistency of color features is essential for application success. The second motivation is to reduce computational complexity without sacrificing the first. This work presents a machine learning approach to color constancy, in which an empirical model is developed from training data. Neural networks and support vector machines are two prominent nonlinear learning theories. The work on support vector machine based color constancy shows superior performance over neural network based color constancy in terms of stability, but support vector machines are time-consuming. An alternative to the support vector machine is a simple, fast, and analytically solvable linear modeling technique known as ridge regression, which learns the dependency between surface reflectance and illumination from a training sample of data. Ridge regression thus answers the twofold motivation behind this work: it is a stable and computationally simple approach. The proposed algorithms, support vector machine and ridge regression, involve a three-step process: first, an input matrix constructed from the preprocessed training data set is trained to obtain a trained model; second, test images are presented to the trained model to obtain a chromaticity estimate of the illuminants present in the test images; finally, a linear diagonal transformation is performed to obtain the color-corrected image.
The results show the effectiveness of the proposed algorithms on both calibrated and uncalibrated data sets in comparison to the methods discussed in the literature review. Finally, the thesis concludes with a complete discussion and summary of the comparison between the proposed approaches and other algorithms.
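The three-step process described above can be sketched as follows. The feature representation and the regularization strength `lam` are illustrative assumptions, not the thesis's exact preprocessing.

```python
import numpy as np

def train_ridge(X, Y, lam=1e-2):
    """Step 1: closed-form ridge regression, W = (X^T X + lam I)^-1 X^T Y.

    X: (N, d) per-image features (e.g. chromaticity statistics; the
    exact features are an assumption here).
    Y: (N, k) illuminant chromaticities of the training images.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def estimate_illuminant(x, W):
    """Step 2: chromaticity estimate for a test feature vector."""
    return x @ W

def diagonal_correct(image, illum_rgb):
    """Step 3: von Kries-style diagonal transform that divides each
    channel by the estimated illuminant, mapping it toward white."""
    return np.asarray(image, dtype=float) / np.asarray(illum_rgb, dtype=float)
```

The analytic solve in step 1 is what makes ridge regression computationally simple compared with training a support vector machine: there is no iterative optimization, only one linear system per model.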
Measured Albedo in the Wild: Filling the Gap in Intrinsics Evaluation
Intrinsic image decomposition and inverse rendering are long-standing
problems in computer vision. To evaluate albedo recovery, most algorithms
report their quantitative performance with a mean Weighted Human Disagreement
Rate (WHDR) metric on the IIW dataset. However, WHDR focuses only on relative
albedo values and often fails to capture overall quality of the albedo. In
order to comprehensively evaluate albedo, we collect a new dataset, Measured
Albedo in the Wild (MAW), and propose three new metrics that complement WHDR:
intensity, chromaticity, and texture metrics. We show that existing algorithms
often improve the WHDR metric but perform poorly on the other metrics. We then
finetune different algorithms on our MAW dataset to significantly improve the
quality of the reconstructed albedo, both quantitatively and qualitatively.
Since the proposed intensity, chromaticity, and texture metrics and the WHDR
are all complementary, we further introduce a relative performance measure that
captures
average performance. By analysing existing algorithms we show that there is
significant room for improvement. Our dataset and evaluation metrics will
enable researchers to develop algorithms that improve albedo reconstruction.
Code and Data available at: https://measuredalbedo.github.io/
Comment: Accepted into ICCP202
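The WHDR metric discussed above can be sketched as follows. The input format and the threshold name `delta` follow common IIW conventions, but the exact I/O layout is an assumption, not the dataset's official evaluation code.

```python
def whdr(albedo, comparisons, delta=0.10):
    """Weighted Human Disagreement Rate, as used on the IIW dataset.

    albedo: dict mapping point id -> predicted albedo intensity.
    comparisons: list of (p1, p2, darker, weight) human judgements,
    where darker is '1' (p1 darker), '2' (p2 darker) or 'E' (equal).
    delta is the relative-difference threshold for calling two albedo
    values equal. Field layout here is an illustrative assumption.
    """
    total = disagree = 0.0
    for p1, p2, darker, weight in comparisons:
        a1, a2 = albedo[p1], albedo[p2]
        # Derive the algorithm's judgement from the albedo ratio.
        if a1 / max(a2, 1e-10) > 1.0 + delta:
            pred = '2'  # p2 is darker
        elif a2 / max(a1, 1e-10) > 1.0 + delta:
            pred = '1'  # p1 is darker
        else:
            pred = 'E'
        total += weight
        if pred != darker:
            disagree += weight
    return disagree / max(total, 1e-10)
```

Because WHDR only checks these pairwise orderings, an albedo map can score well while being globally too bright, wrongly tinted, or missing texture, which is the gap the MAW intensity, chromaticity, and texture metrics are designed to fill.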