Understanding shading effects in images is critical for a variety of vision
and graphics problems, including intrinsic image decomposition, shadow removal,
image relighting, and inverse rendering. As with other vision tasks, machine
learning is a promising approach to understanding shading, but little
ground-truth shading data is available for real-world images. We
introduce Shading Annotations in the Wild (SAW), a new large-scale, public
dataset of shading annotations in indoor scenes, comprising multiple forms of
shading judgments obtained via crowdsourcing, along with shading annotations
automatically generated from RGB-D imagery. We use this data to train a
convolutional neural network to predict per-pixel shading information in an
image. We demonstrate the value of our data and network in an application to
intrinsic images, where we can reduce decomposition artifacts produced by
existing algorithms. Our database is available at
http://opensurfaces.cs.cornell.edu/saw/.