3 research outputs found
Color naming guided intrinsic image decomposition
Intrinsic image decomposition is a severely under-constrained problem. User
interactions can help to reduce the ambiguity of the decomposition
considerably. The traditional way of user interaction is to draw scribbles that
indicate regions with constant reflectance or shading. However, the scope of
each scribble's effect is quite limited, so dozens of scribbles are often
needed to rectify the whole decomposition, which is time-consuming. In this
paper we propose an efficient form of user interaction in which users need
only annotate the color composition of the image. Color composition reveals the
global distribution of reflectance, so it can help to adapt the whole
decomposition directly. We build a generative model of the process by which the
material's albedo produces both the reflectance, through imaging, and the
color labels, through color naming. Our model effectively fuses the physical
properties of image formation with the top-down information from human color
perception. Experimental results show that color naming can improve the
performance of intrinsic image decomposition, especially in removing the
shadows left in the reflectance and in addressing the color constancy problem.
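The abstract rests on the standard intrinsic image formation model, in which each pixel is the product of reflectance (albedo) and shading. The following is a minimal numpy sketch of that model, not the paper's generative model: it only illustrates that when one component is known, the other is recovered by a subtraction in the log domain, where the product becomes a sum.

```python
import numpy as np

# Intrinsic image formation: observed image I = R * S, where
# R is reflectance (albedo) and S is shading. All values here
# are synthetic and strictly positive so logs are well-defined.
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.2, 1.0, size=(4, 4))  # surface albedo
shading = rng.uniform(0.1, 1.0, size=(4, 4))      # illumination
image = reflectance * shading                     # observed image

# log I = log R + log S, so knowing S recovers R exactly.
recovered = np.exp(np.log(image) - np.log(shading))
assert np.allclose(recovered, reflectance)
```

The under-constrained nature of the problem is visible here: without the extra information (scribbles or, in this paper, color-composition labels), infinitely many (R, S) pairs multiply to the same image.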
Consistency-aware Shading Orders Selective Fusion for Intrinsic Image Decomposition
We address the problem of decomposing a single image into reflectance and
shading. The difficulty comes from the fact that the image's components---the
surface albedo, the direct illumination, and the ambient illumination---are
heavily coupled in the observed image. We propose to infer the shading by ordering
pixels by their relative brightness, without knowing the absolute values of the
image components beforehand. The pairwise shading orders are estimated in two
ways: brightness order and low-order fittings of the local shading field. The
brightness order is a non-local measure, which can be applied to any pair of
pixels including those whose reflectance and shading are both different. The
low-order fittings are used for pixel pairs within local regions of smooth
shading. Together, they can capture both global order structure and local
variations of the shading. We propose a Consistency-aware Selective Fusion
(CSF) to integrate the pairwise orders into a globally consistent order. The
iterative selection process resolves the conflicts between the pairwise orders
obtained by different estimation methods. Inconsistent or unreliable pairwise
orders will be automatically excluded from the fusion to avoid polluting the
global order. Experiments on the MIT Intrinsic Image dataset show that the
proposed model is effective at recovering the shading including deep shadows.
Our model also works well on natural images from the IIW dataset, the UIUC
Shadow dataset and the NYU-Depth dataset, where the colors of direct lights and
ambient lights are quite different.
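The core idea of fusing pairwise orders into a globally consistent order can be sketched greedily: accept pairwise shading orders in decreasing confidence and exclude any order that would contradict the orders already accepted. This is an illustrative simplification of the paper's CSF (which uses iterative selective fusion), with made-up pixel indices and confidences:

```python
from collections import defaultdict

def fuse_orders(n, pairwise):
    """Greedy consistency-aware fusion over n pixels.
    `pairwise` holds (i, j, conf) meaning pixel i is estimated
    brighter than pixel j with the given confidence."""
    graph = defaultdict(set)  # accepted orders as a DAG: i -> j

    def reaches(src, dst):
        # Depth-first search: is dst reachable from src?
        stack, seen = [src], set()
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            if u not in seen:
                seen.add(u)
                stack.extend(graph[u])
        return False

    for i, j, _ in sorted(pairwise, key=lambda t: -t[2]):
        if not reaches(j, i):  # consistent with accepted orders: keep
            graph[i].add(j)
        # else: conflicting order is excluded from the fusion

    # Global order: topological sort of the accepted DAG.
    indeg = {v: 0 for v in range(n)}
    for u in list(graph):
        for v in graph[u]:
            indeg[v] += 1
    order, ready = [], [v for v in range(n) if indeg[v] == 0]
    while ready:
        u = ready.pop()
        order.append(u)
        for v in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order

# Three pixels; the low-confidence order 2>0 conflicts with the
# accepted chain 0>1>2 and is automatically excluded.
print(fuse_orders(3, [(0, 1, 0.9), (1, 2, 0.8), (2, 0, 0.3)]))  # [0, 1, 2]
```

The exclusion step mirrors the abstract's point that inconsistent or unreliable pairwise orders must not pollute the global order.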
Deep intrinsic decomposition trained on surreal scenes yet with realistic light effects
Estimation of intrinsic images remains a challenging task due to weaknesses
of ground-truth datasets, which are either too small or insufficiently
realistic. On the other hand, end-to-end deep learning architectures are
starting to achieve interesting results that we believe could be improved
further if important physical hints were not ignored. In this work, we present
a twofold framework: (a) flexible image generation that overcomes classical
dataset problems, such as limited size, jointly with coherent lighting
appearance; and (b) a flexible architecture that ties physical properties together through intrinsic
losses. Our proposal is versatile, requires little computation time, and
achieves state-of-the-art results.