Joint Material and Illumination Estimation from Photo Sets in the Wild
Faithful manipulation of shape, material, and illumination in 2D Internet
images would greatly benefit from a reliable factorization of appearance into
material (i.e., diffuse and specular) and illumination (i.e., environment
maps). On the one hand, current methods that produce very high-fidelity
results typically require controlled settings, expensive devices, or
significant manual effort. On the other hand, methods that are automatic and
work on 'in the wild' Internet images often extract only low-frequency
lighting or diffuse materials. In this work, we propose to make use of a set of
photographs in order to jointly estimate the non-diffuse materials and sharp
lighting in an uncontrolled setting. Our key observation is that seeing
multiple instances of the same material under different illumination (i.e.,
environments), and different materials under the same illumination, provides
valuable constraints that can be exploited to yield a high-quality solution
(i.e., specular materials and environment illumination) for all the observed
materials and environments. Similar constraints also arise when observing
multiple materials in a single environment, or a single material across
multiple environments. The core of this approach is an optimization procedure
that uses two neural networks, trained on synthetic images, to predict
good gradients in parametric space given observations of reflected light. We
evaluate our method on a range of synthetic and real examples to generate
high-quality estimates, qualitatively compare our results against
state-of-the-art alternatives via a user study, and demonstrate
photo-consistent image manipulation that is otherwise very challenging to
achieve.
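As a rough illustration of the learned-gradient idea above, the following sketch alternates network-predicted descent steps on material and environment parameters. The architectures, dimensions, and names are hypothetical placeholders, not the paper's actual networks or parameterizations.

```python
import torch
import torch.nn as nn

class GradNet(nn.Module):
    """Illustrative stand-in: maps observed reflected light plus the
    current parameter estimate to a descent step in parameter space."""
    def __init__(self, obs_dim, param_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(obs_dim + param_dim, 128), nn.ReLU(),
            nn.Linear(128, param_dim))

    def forward(self, obs, params):
        return self.mlp(torch.cat([obs, params], dim=-1))

obs_dim, mat_dim, env_dim = 64, 8, 16      # hypothetical sizes
material_net = GradNet(obs_dim, mat_dim)   # would be trained on synthetic images
light_net = GradNet(obs_dim, env_dim)      # would be trained on synthetic images

def joint_estimate(obs, theta_mat, theta_env, steps=200, lr=1e-2):
    # Alternate learned-gradient updates until material and environment
    # parameters jointly explain the observed reflected light.
    for _ in range(steps):
        theta_mat = theta_mat - lr * material_net(obs, theta_mat)
        theta_env = theta_env - lr * light_net(obs, theta_env)
    return theta_mat, theta_env
```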
The Iray Light Transport Simulation and Rendering System
While ray tracing has become increasingly common and path tracing is well
understood by now, a major challenge lies in crafting an easy-to-use and
efficient system implementing these technologies. Following a purely
physically-based paradigm while still allowing for artistic workflows, the Iray
light transport simulation and rendering system allows for rendering complex
scenes at the push of a button and thus makes accurate light transport
simulation widely available. In this document we discuss the challenges and
implementation choices that follow from our primary design decisions,
demonstrating that such a rendering system can be made a practical, scalable,
and efficient real-world application that has been adopted by various companies
across many fields and is in use by many industry professionals today.
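For context on the path tracing that underlies such a system, here is the single-sample Monte Carlo estimator in its generic textbook form; this is purely illustrative and not Iray's implementation. It estimates the radiance reflected by a Lambertian surface under a uniform environment light, where the analytic answer is the albedo.

```python
import math
import random

def sample_hemisphere_cosine():
    # Cosine-weighted direction sampling: pdf(w) = cos(theta) / pi.
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def reflected_radiance(albedo=0.7, n_samples=100_000):
    brdf = albedo / math.pi             # Lambertian BRDF
    total = 0.0
    for _ in range(n_samples):
        wx, wy, wz = sample_hemisphere_cosine()
        pdf = wz / math.pi              # matches the sampler above
        L_i = 1.0                       # uniform environment light
        total += brdf * L_i * wz / pdf  # f_r * L_i * cos(theta) / pdf
    return total / n_samples            # converges to albedo * L_i

print(reflected_radiance())             # ~0.7
```

Note that with cosine-weighted sampling of a Lambertian BRDF the per-sample weight is constant, i.e., perfect importance sampling for this special case; real scenes add visibility, emission, and recursive bounces on top of this estimator.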
Procedural Modeling and Physically Based Rendering for Synthetic Data Generation in Automotive Applications
We present an overview and evaluation of a new, systematic approach for
generation of highly realistic, annotated synthetic data for training of deep
neural networks in computer vision tasks. The main contribution is a procedural
world-modeling approach that enables high variability coupled with physically
accurate image synthesis, a departure from the hand-modeled virtual
worlds and approximate image synthesis methods used in real-time applications.
The benefits of our approach include flexible, physically accurate and scalable
image synthesis, implicit wide coverage of classes and features, and complete
data introspection for annotations, which all contribute to quality and cost
efficiency. To evaluate our approach and the efficacy of the resulting data, we
use semantic segmentation for autonomous vehicles and robotic navigation as the
main application, and we train multiple deep learning architectures using
synthetic data with and without fine-tuning on organic (i.e., real-world) data.
The evaluation shows that our approach improves the neural network's
performance and that even modest implementation efforts produce
state-of-the-art results.
Comment: The project web page at http://vcl.itn.liu.se/publications/2017/TKWU17/
contains a version of the paper with high-resolution images as well as
additional material.
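A minimal sketch of the evaluation protocol described above: train a segmentation network on synthetic data, then optionally fine-tune on a smaller organic dataset. The loaders, architecture, and hyperparameters are placeholders, not taken from the paper.

```python
import torch
from torch import nn, optim

def train(model, loader, epochs, lr):
    """Standard supervised training loop for semantic segmentation,
    with per-pixel class labels."""
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model

# Hypothetical usage: pretrain on synthetic data, then fine-tune on real data.
# model = SomeSegmentationNet(num_classes=19)         # placeholder
# train(model, synthetic_loader, epochs=50, lr=1e-4)  # synthetic pretraining
# train(model, organic_loader, epochs=10, lr=1e-5)    # fine-tune on real data
```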
Unsupervised Deep Single-Image Intrinsic Decomposition using Illumination-Varying Image Sequences
Machine learning based Single Image Intrinsic Decomposition (SIID) methods
decompose a captured scene into its albedo and shading images by using the
knowledge of a large set of known and realistic ground truth decompositions.
Collecting and annotating such a dataset is an approach that cannot scale to
sufficient variety and realism. We free ourselves from this limitation by
training on unannotated images.
Our method leverages the observation that two images of the same scene but
with different lighting provide useful information on their intrinsic
properties: by definition, albedo is invariant to lighting conditions, and
cross-combining the estimated albedo of a first image with the estimated
shading of a second one should reconstruct the second input image. We
transcribe this relationship into a Siamese training scheme for a deep
convolutional neural network that decomposes a single image into albedo and
shading. The Siamese setting allows us to introduce a new loss function
including such cross-combinations, and to train solely on (time-lapse) images,
discarding the need for any ground truth annotations.
As a result, our method has the good properties of i) taking advantage of the
time-varying information of image sequences in the (pre-computed) training
step, ii) not requiring ground truth data to train on, and iii) being able to
decompose single images of unseen scenes at runtime. To demonstrate and
evaluate our work, we additionally propose a new rendered dataset containing
illumination-varying scenes and a set of quantitative metrics to evaluate SIID
algorithms. Despite its unsupervised nature, our results compete with
state-of-the-art methods, including supervised and non-data-driven methods.
Comment: To appear in Pacific Graphics 201
A Survey of Ocean Simulation and Rendering Techniques in Computer Graphics
This paper presents a survey of ocean simulation and rendering methods in
computer graphics. To model and animate the ocean's surface, these methods
rely on two main approaches: on the one hand, those that approximate
ocean dynamics with parametric, spectral, or hybrid models and use empirical
laws from oceanographic research. We will see that this type of method
essentially allows the simulation of ocean scenes in the deep-water domain,
without breaking waves. On the other hand, physically-based methods use the
Navier-Stokes equations (NSE) to represent breaking waves and, more generally,
the ocean surface near the shore. We also describe ocean rendering methods in
computer graphics, with a special interest in the simulation of phenomena such
as foam and spray, and the interaction of light with the ocean surface.
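To make the spectral family of methods concrete, here is a minimal deep-water heightfield in the Tessendorf style (Phillips spectrum shaped by wind, then an inverse FFT); constants and the simplified spectrum are illustrative, not drawn from the survey.

```python
import numpy as np

N, L = 256, 100.0                       # grid resolution, patch size (meters)
g, wind = 9.81, np.array([10.0, 0.0])   # gravity, wind vector (m/s)

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k)
k_len = np.hypot(kx, ky)
k_len[0, 0] = 1e-6                      # avoid division by zero at the DC term

# Phillips spectrum: wave-energy distribution over wave vectors for a wind.
Lw = wind.dot(wind) / g                 # largest wave for this wind speed
k_hat_dot_w = (kx * wind[0] + ky * wind[1]) / (k_len * np.linalg.norm(wind))
phillips = np.exp(-1.0 / (k_len * Lw) ** 2) / k_len ** 4 * k_hat_dot_w ** 2
phillips[0, 0] = 0.0

# Random complex amplitudes shaped by the spectrum, then an inverse FFT
# yields one spatial heightfield frame; animation would advance phases
# via the deep-water dispersion relation omega = sqrt(g * |k|).
rng = np.random.default_rng(0)
h0 = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) \
     * np.sqrt(phillips / 2.0)
height = np.real(np.fft.ifft2(h0))
```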