An intuitive control space for material appearance
Many different techniques for measuring material appearance have been
proposed in the last few years. These have produced large public datasets,
which have been used for accurate, data-driven appearance modeling. However,
although these datasets have allowed us to reach an unprecedented level of
realism in visual appearance, editing the captured data remains a challenge. In
this paper, we present an intuitive control space for predictable editing of
captured BRDF data, which allows for artistic creation of plausible novel
material appearances, bypassing the difficulty of acquiring novel samples. We
first synthesize novel materials, extending the existing MERL dataset up to 400
mathematically valid BRDFs. We then design a large-scale experiment, gathering
56,000 subjective ratings on the high-level perceptual attributes that best
describe our extended dataset of materials. Using these ratings, we build and
train networks of radial basis functions to act as functionals mapping the
perceptual attributes to an underlying PCA-based representation of BRDFs. We
show that our functionals are excellent predictors of the perceived attributes
of appearance. Our control space enables many applications, including intuitive
material editing of a wide range of visual properties, guidance for gamut
mapping, analysis of the correlation between perceptual attributes, or novel
appearance similarity metrics. Moreover, our methodology can be used to derive
functionals applicable to classic analytic BRDF representations. We release our
code and dataset publicly to support and encourage further research in this direction.
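
The attribute-to-PCA mapping described above can be pictured as standard radial-basis-function regression. Below is a minimal sketch in Python/NumPy, not the authors' released code; the Gaussian kernel, ridge regularizer, random center selection, and all names here are illustrative assumptions.

    import numpy as np

    def rbf_features(X, centers, gamma=1.0):
        # Gaussian RBF design matrix: phi[i, j] = exp(-gamma * ||x_i - c_j||^2)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    def fit_rbf_functional(attrs, pca_coeffs, centers, gamma=1.0, reg=1e-3):
        # Ridge-regularized least squares so that Phi @ W approximates pca_coeffs
        Phi = rbf_features(attrs, centers, gamma)
        A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
        return np.linalg.solve(A, Phi.T @ pca_coeffs)

    def predict_brdf_coeffs(attrs, centers, W, gamma=1.0):
        # Map perceptual attribute ratings to PCA coefficients of a BRDF.
        return rbf_features(attrs, centers, gamma) @ W

    # Toy usage: 400 materials with 6 hypothetical attribute ratings each,
    # regressed onto 5 hypothetical PCA coefficients (all sizes assumed).
    rng = np.random.default_rng(0)
    attrs = rng.uniform(size=(400, 6))
    coeffs = rng.normal(size=(400, 5))
    centers = attrs[rng.choice(400, size=30, replace=False)]
    W = fit_rbf_functional(attrs, coeffs, centers)
    pred = predict_brdf_coeffs(attrs, centers, W)

In this picture, editing a material amounts to moving a point in attribute space and decoding the predicted PCA coefficients back into a BRDF.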
Joint Material and Illumination Estimation from Photo Sets in the Wild
Faithful manipulation of shape, material, and illumination in 2D Internet
images would greatly benefit from a reliable factorization of appearance into
material (i.e., diffuse and specular) and illumination (i.e., environment
maps). On the one hand, current methods that produce very high-fidelity
results typically require controlled settings, expensive devices, or
significant manual effort. On the other hand, methods that are automatic and
work on 'in the wild' Internet images often extract only low-frequency
lighting or diffuse materials. In this work, we propose to make use of a set of
photographs in order to jointly estimate the non-diffuse materials and sharp
lighting in an uncontrolled setting. Our key observation is that seeing
multiple instances of the same material under different illumination (i.e.,
environments) and different materials under the same illumination provides
valuable constraints that can be exploited to yield a high-quality solution
(i.e., specular materials and environment illumination) for all the observed
materials and environments. These constraints arise whether we observe
multiple materials in a single environment or a single material across
multiple environments. The core of our approach is an optimization procedure
that uses two neural networks, trained on synthetic images, to predict
good gradients in parametric space given observations of reflected light. We
evaluate our method on a range of synthetic and real examples to generate
high-quality estimates, qualitatively compare our results against
state-of-the-art alternatives via a user study, and demonstrate
photo-consistent image manipulation that is otherwise very challenging to
achieve.
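
The core optimization can be sketched as an alternating, learned-gradient loop. The snippet below is an illustration in Python/NumPy rather than the paper's implementation; material_net and light_net are hypothetical stand-ins for the two synthetically trained networks, and the gradient averaging, learning rate, and step count are assumptions.

    import numpy as np

    def joint_estimation(observations, material_net, light_net,
                         theta_m, theta_l, lr=0.01, steps=200):
        # Alternately refine material (theta_m) and illumination (theta_l)
        # parameters. Each network maps an observation of reflected light
        # plus the current estimates to a descent direction (assumed API).
        theta_m, theta_l = theta_m.copy(), theta_l.copy()
        for _ in range(steps):
            grad_m = np.mean([material_net(o, theta_m, theta_l)
                              for o in observations], axis=0)
            grad_l = np.mean([light_net(o, theta_m, theta_l)
                              for o in observations], axis=0)
            theta_m -= lr * grad_m   # update shared material estimate
            theta_l -= lr * grad_l   # update shared environment estimate
        return theta_m, theta_l

Pooling the predicted gradients across all photographs is what lets observations of the same material under different environments (and vice versa) constrain one another.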