An intuitive control space for material appearance
Many different techniques for measuring material appearance have been
proposed in the last few years. These have produced large public datasets,
which have been used for accurate, data-driven appearance modeling. However,
although these datasets have allowed us to reach an unprecedented level of
realism in visual appearance, editing the captured data remains a challenge. In
this paper, we present an intuitive control space for predictable editing of
captured BRDF data, which allows for artistic creation of plausible novel
material appearances, bypassing the difficulty of acquiring novel samples. We
first synthesize novel materials, extending the existing MERL dataset up to 400
mathematically valid BRDFs. We then design a large-scale experiment, gathering
56,000 subjective ratings on the high-level perceptual attributes that best
describe our extended dataset of materials. Using these ratings, we build and
train networks of radial basis functions to act as functionals mapping the
perceptual attributes to an underlying PCA-based representation of BRDFs. We
show that our functionals are excellent predictors of the perceived attributes
of appearance. Our control space enables many applications, including intuitive
material editing of a wide range of visual properties, guidance for gamut
mapping, analysis of the correlation between perceptual attributes, or novel
appearance similarity metrics. Moreover, our methodology can be used to derive
functionals applicable to classic analytic BRDF representations. We release our
code and dataset publicly, in order to support and encourage further research
in this direction.
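As a rough illustration of the kind of functional involved (not the authors'
released code), the sketch below fits Gaussian radial basis functions that map
perceptual attribute ratings to PCA coefficients of a tabulated BRDF; the
function names, the Gaussian kernel, and the ridge regularization are
illustrative assumptions.

    # Hedged sketch: RBF regression from perceptual attributes to PCA
    # coefficients of a BRDF. All names and hyperparameters are illustrative.
    import numpy as np

    def rbf_kernel(X, centers, gamma=1.0):
        # Gaussian kernel between attribute vectors and RBF centers.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def fit_rbf(X_attr, Y_pca, gamma=1.0, reg=1e-6):
        # Use the training attribute vectors as centers; solve a
        # ridge-regularized least-squares problem for the output weights.
        Phi = rbf_kernel(X_attr, X_attr, gamma)
        W = np.linalg.solve(Phi.T @ Phi + reg * np.eye(len(X_attr)), Phi.T @ Y_pca)
        return X_attr, W

    def predict_rbf(attrs, centers, W, gamma=1.0):
        # Map new attribute ratings to PCA coefficients, which a decoder would
        # then turn back into a tabulated BRDF.
        return rbf_kernel(np.atleast_2d(attrs), centers, gamma) @ W

Editing then amounts to adjusting one attribute rating and re-evaluating the
functional to obtain a new, plausible BRDF.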
Image-based remapping of spatially-varying material appearance
BRDF models are ubiquitous tools for the representation of material
appearance. However, there is now an astonishingly large number of different
models in practical use. Both a lack of BRDF model standardisation across
implementations found in different renderers, as well as the often semantically
different capabilities of various models, have grown to be a major hindrance to
the interchange of production assets between different rendering systems.
Current attempts to solve this problem rely on manually finding visual
similarities between models, or mathematical ones between their functional
shapes, which requires access to the shader implementation, usually unavailable
in commercial renderers. We present a method for automatic translation of
material appearance between different BRDF models, which uses an image-based
metric for appearance comparison and delegates the interaction with the
model to the renderer. We analyse the performance of the method, both with
respect to robustness and visual differences of the fits for multiple
combinations of BRDF models. While it is effective for individual BRDFs, the
computational cost does not scale well for spatially-varying BRDFs. Therefore,
we further present a parametric regression scheme that approximates the shape
of the transformation function and generates a reduced representation which
evaluates instantly and without further interaction with the renderer. We
present respective visual comparisons of the remapped SVBRDF models for
commonly used renderers and shading models, and show that our approach is able
to extrapolate transformed BRDF parameters better than other complex regression
schemes.
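A hedged sketch of the image-based fitting step, not the paper's
implementation: parameters of the target BRDF model are optimized to minimize
an image-space difference against a rendering of the source material. The
render_with_params() hook is a hypothetical stand-in for the call into the
(black-box) renderer, and the plain L2 metric and Nelder-Mead optimizer are
assumptions; derivative-free search keeps the renderer a black box, as the
method requires.

    # Hedged sketch: image-based remapping of one material to a target BRDF model.
    import numpy as np
    from scipy.optimize import minimize

    def image_metric(img_a, img_b):
        # Simple per-pixel L2 difference; the paper's metric may differ.
        return float(np.mean((img_a - img_b) ** 2))

    def remap_material(reference_image, render_with_params, x0):
        # x0: initial guess for the target model's parameter vector.
        # render_with_params(params) -> image is delegated to the renderer.
        def objective(params):
            return image_metric(render_with_params(params), reference_image)
        res = minimize(objective, x0, method="Nelder-Mead")
        return res.x

For spatially-varying materials, the regression scheme described in the
abstract would replace this per-texel optimization with a single learned
transformation of parameters.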
Metappearance: Meta-Learning for Visual Appearance Reproduction
There currently are two main approaches to reproducing visual appearance
using Machine Learning (ML): The first is training models that generalize over
different instances of a problem, e.g., different images from a dataset. Such
models learn priors over the data corpus and use this knowledge to provide fast
inference with little input, often as a one-shot operation. However, this
generality comes at the cost of fidelity, as such methods often struggle to
achieve the final quality required. The second approach does not train a model
that generalizes across the data, but overfits to a single instance of a
problem, e.g., a flash image of a material. This produces detailed and
high-quality results, but requires time-consuming training and is, as mere
non-linear function fitting, unable to exploit previous experience. Techniques
such as fine-tuning or auto-decoders combine both approaches but are sequential
and rely on per-exemplar optimization. We suggest combining both techniques
end-to-end using meta-learning: we over-fit to a single problem instance in
an inner loop, while also learning how to do so efficiently in an outer loop
that builds intuition over many optimization runs. We demonstrate this concept
to be versatile and efficient, applying it to RGB textures, Bi-directional
Reflectance Distribution Functions (BRDFs), or Spatially-varying BRDFs
(svBRDFs).
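The inner/outer structure can be sketched with a simple first-order
meta-learning update (Reptile-style); this illustrates the general idea only,
with assumed function names, and is not the paper's exact algorithm, which may
use a different meta-learning scheme and losses.

    # Hedged sketch: over-fit a clone to one appearance instance (inner loop),
    # then nudge the shared initialization toward the adapted weights (outer loop).
    import copy
    import torch

    def inner_fit(model, batch, steps=8, lr=1e-2):
        # Over-fit a copy of the meta-model to a single instance (e.g. one texture).
        adapted = copy.deepcopy(model)
        opt = torch.optim.SGD(adapted.parameters(), lr=lr)
        x, y = batch
        for _ in range(steps):
            opt.zero_grad()
            torch.nn.functional.mse_loss(adapted(x), y).backward()
            opt.step()
        return adapted

    def meta_train(model, tasks, meta_lr=0.1, epochs=100):
        # Outer loop: accumulate "intuition" over many optimization runs by
        # moving the initialization toward each task's adapted weights.
        for _ in range(epochs):
            for batch in tasks:
                adapted = inner_fit(model, batch)
                with torch.no_grad():
                    for p, q in zip(model.parameters(), adapted.parameters()):
                        p += meta_lr * (q - p)
        return model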
Neural BRDF Representation and Importance Sampling
Controlled capture of real-world material appearance yields tabulated sets of
highly realistic reflectance data. In practice, however, its high memory
footprint requires compression into a representation that can be used
efficiently in rendering while remaining faithful to the original. Previous
works in appearance encoding often prioritized one of these requirements at
the expense of the other, by either applying high-fidelity array compression
strategies not suited for efficient queries during rendering, or by fitting a
compact analytic model that lacks expressiveness. We present a compact neural
network-based representation of BRDF data that combines high-accuracy
reconstruction with efficient practical rendering via built-in interpolation
of reflectance. We encode BRDFs as lightweight networks, and propose a
training scheme with adaptive angular sampling, critical for the accurate
reconstruction of specular highlights. Additionally, we propose a novel
approach to make our representation amenable to importance sampling: rather
than inverting the trained networks, we learn to encode them in a more compact
embedding that can be mapped to parameters of an analytic BRDF for which
importance sampling is known. We evaluate encoding results on isotropic and
anisotropic BRDFs from multiple real-world datasets, and importance sampling
performance for isotropic BRDFs mapped to two different analytic models.
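A hedged sketch of what such a lightweight per-material network can look like,
assuming a Rusinkiewicz-style (half/difference angle) input parameterization;
the layer sizes and the softplus output are illustrative choices, not the
published architecture, and the adaptive angular sampling would govern which
directions are drawn during training.

    # Hedged sketch: a tiny MLP encoding one BRDF; all sizes are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NeuralBRDF(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(4, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),
            )

        def forward(self, angles):
            # angles: (N, 4) half/difference angles; returns (N, 3) RGB reflectance.
            # softplus keeps reflectance non-negative, which helps when fitting
            # high-dynamic-range specular peaks.
            return F.softplus(self.net(angles))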
Learning to Learn and Sample BRDFs
We propose a method to accelerate the joint process of physically acquiring
and learning neural Bi-directional Reflectance Distribution Function (BRDF)
models. While BRDF learning alone can be accelerated by meta-learning,
acquisition remains slow as it relies on a mechanical process. We show that
meta-learning can be extended to optimize the physical sampling pattern, too.
After our method has been meta-trained for a set of fully-sampled BRDFs, it is
able to quickly train on new BRDFs with up to five orders of magnitude fewer
physical acquisition samples at similar quality. Our approach also extends to
other linear and non-linear BRDF models, which we show in an extensive
evaluation.
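To make the idea of a learnable acquisition pattern concrete, the toy sketch
below treats the sample locations themselves as parameters of the outer
optimization; the inner fit is a closed-form ridge regression on a tiny fixed
feature basis, so the outer loss stays differentiable with respect to the
sample positions. The feature map, the synthetic target, and all sizes are
stand-ins for the paper's neural BRDF model and measured data.

    # Hedged sketch: learn *where* to sample so that a model fitted from only
    # K samples reconstructs the full signal well.
    import torch

    def features(x):
        # Tiny fixed basis standing in for a BRDF model.
        return torch.cat([x, torch.sin(x), torch.cos(x)], dim=-1)

    def target(x):
        # Synthetic "fully sampled" reference, available at meta-training time.
        return torch.sin(3 * x).sum(-1, keepdim=True)

    K = 8
    sample_dirs = torch.rand(K, 2, requires_grad=True)   # learnable sample pattern
    opt = torch.optim.Adam([sample_dirs], lr=1e-2)
    dense = torch.rand(512, 2)                            # dense evaluation grid

    for step in range(200):
        Phi = features(sample_dirs)                       # K x F features
        y = target(sample_dirs)                           # values at the samples
        # Inner fit in closed form (ridge regression), differentiable in sample_dirs.
        w = torch.linalg.solve(Phi.T @ Phi + 1e-3 * torch.eye(Phi.shape[1]),
                               Phi.T @ y)
        # Outer loss: reconstruction quality over the dense grid.
        loss = torch.nn.functional.mse_loss(features(dense) @ w, target(dense))
        opt.zero_grad()
        loss.backward()
        opt.step()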
PS-FCN: A Flexible Learning Framework for Photometric Stereo
This paper addresses the problem of photometric stereo for non-Lambertian
surfaces. Existing approaches often adopt simplified reflectance models to make
the problem more tractable, but this greatly hinders their application to
real-world objects. In this paper, we propose a deep fully convolutional
network, called PS-FCN, that takes an arbitrary number of images of a static
object captured under different light directions with a fixed camera as input,
and predicts a normal map of the object in a fast feed-forward pass. Unlike the
recently proposed learning-based methods, PS-FCN does not require a pre-defined
set of light directions during training and testing, and can handle multiple
images and light directions in an order-agnostic manner. Although we train
PS-FCN on synthetic data, it can generalize well on real datasets. We further
show that PS-FCN can be easily extended to handle the problem of uncalibrated
photometric stereo. Extensive experiments on public real datasets show that
PS-FCN outperforms existing approaches in calibrated photometric stereo, and
promising results are achieved in the uncalibrated scenario, clearly
demonstrating its effectiveness. Comment: ECCV 2018: https://guanyingc.github.io/PS-FC
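One common way to achieve such order-agnostic handling of an arbitrary number
of inputs is an element-wise max fusion over per-image features; the sketch
below illustrates that idea with assumed channel counts and depths and is not
the published PS-FCN architecture.

    # Hedged sketch: shared per-observation extractor, element-wise max fusion
    # over the image dimension, and a small regressor producing a normal map.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyPSNet(nn.Module):
        def __init__(self, feat=64):
            super().__init__()
            # Each observation is 3 image channels + 3 light-direction channels.
            self.extract = nn.Sequential(
                nn.Conv2d(6, feat, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            )
            self.regress = nn.Sequential(
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat, 3, 3, padding=1),
            )

        def forward(self, images, lights):
            # images: (B, N, 3, H, W); lights: (B, N, 3), broadcast per pixel.
            B, N, _, H, W = images.shape
            l = lights.view(B, N, 3, 1, 1).expand(B, N, 3, H, W)
            x = torch.cat([images, l], dim=2).view(B * N, 6, H, W)
            feats = self.extract(x).view(B, N, -1, H, W)
            fused = feats.max(dim=1).values           # order-agnostic over N images
            return F.normalize(self.regress(fused), dim=1)  # unit normal map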