DeLight-Net: Decomposing Reflectance Maps into Specular Materials and Natural Illumination
In this paper we extract surface reflectance and natural environmental
illumination from a reflectance map, i.e. from a single 2D image of a sphere of
one material under one illumination. This is a notoriously difficult problem,
yet it is key to various re-rendering applications. With recent advances in
estimating reflectance maps from 2D images, their further decomposition has
become increasingly relevant.
To this end, we propose a Convolutional Neural Network (CNN) architecture,
trained solely on synthetic data, that reconstructs both material parameters
(i.e. Phong) and illumination (i.e. high-resolution spherical illumination
maps). We demonstrate the decomposition of both synthetic and real photographs
of reflectance maps, in High Dynamic Range (HDR) and, for the first time, in
Low Dynamic Range (LDR). Results are compared to previous approaches
quantitatively as well as qualitatively in terms of re-renderings where
illumination, material, view or shape are changed.
Comment: Stamatios Georgoulis and Konstantinos Rematas contributed equally to this work
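To make "material parameters (i.e. Phong)" concrete, here is a minimal sketch of the forward model such a decomposition inverts: a reflectance map rendered for a single Phong material under one directional light. This is a simplification (the paper targets full spherical illumination maps), and the function and parameter names are illustrative, not the authors' code:

```python
import numpy as np

def phong_reflectance_map(kd, ks, shininess, light_dir, size=64):
    """Render a reflectance map: the shading of a sphere of one Phong
    material under a single directional light, viewed along +z."""
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    v = np.array([0.0, 0.0, 1.0])            # viewing direction
    ys, xs = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    mask = xs**2 + ys**2 <= 1.0              # visible hemisphere of normals
    zs = np.sqrt(np.clip(1.0 - xs**2 - ys**2, 0.0, 1.0))
    n = np.stack([xs, ys, zs], axis=-1)      # unit sphere normals
    diffuse = np.clip(n @ l, 0.0, None)      # Lambertian term n.l
    r = 2.0 * diffuse[..., None] * n - l     # reflection of l about n
    specular = np.clip(r @ v, 0.0, None) ** shininess
    return (kd * diffuse + ks * specular) * mask

rm = phong_reflectance_map(kd=0.6, ks=0.4, shininess=20.0,
                           light_dir=[0.3, 0.3, 1.0])
```

The decomposition task in the abstract is the inverse: given an image like `rm`, recover `kd`, `ks`, `shininess` and the illumination.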
The Visual Centrifuge: Model-Free Layered Video Representations
True video understanding requires making sense of non-Lambertian scenes, where
the color of light arriving at the camera sensor encodes information not just
about the last object it collided with, but about multiple mediums -- colored
windows, dirty mirrors, smoke or rain. Layered video representations have the
potential to accurately model realistic scenes but have so far required
stringent assumptions on motion, lighting and shape. Here we propose a
learning-based approach for multi-layered video representation: we introduce
novel uncertainty-capturing 3D convolutional architectures and train them to
separate blended videos. We show that these models then generalize to single
videos, where they exhibit interesting abilities: color constancy, factoring
out shadows and separating reflections. We present quantitative and qualitative
results on real-world videos.
Comment: Appears in: 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019). This arXiv version contains the CVPR camera-ready paper (with larger figures) as well as an appendix detailing the model architecture
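The training signal described above, synthetically blending two videos and asking the model to pull them apart, can be sketched as follows. This is a minimal numpy version; the uniform-average blend and the permutation-invariant loss below are illustrative simplifications, not the paper's exact formulation:

```python
import numpy as np

def blend(clip_a, clip_b):
    """Uniform-average blend of two clips of shape (T, H, W, C)."""
    return 0.5 * (clip_a + clip_b)

def separation_loss(pred_layers, true_clips):
    """L2 reconstruction error, minimized over the assignment of the two
    predicted layers to the two ground-truth clips, so the arbitrary
    ordering of the layers is not penalized."""
    (p1, p2), (t1, t2) = pred_layers, true_clips
    keep = np.mean((p1 - t1) ** 2) + np.mean((p2 - t2) ** 2)
    swap = np.mean((p1 - t2) ** 2) + np.mean((p2 - t1) ** 2)
    return min(keep, swap)

rng = np.random.default_rng(0)
a = rng.random((4, 8, 8, 3))
b = rng.random((4, 8, 8, 3))
mixed = blend(a, b)                     # input to the separation model
loss = separation_loss((b, a), (a, b))  # perfect but swapped prediction
```

Because the loss is minimized over layer assignments, a perfect separation in either order scores zero.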
Two-dimensional models as testing ground for principles and concepts of local quantum physics
In the past, two-dimensional models of QFT have served as theoretical
laboratories for testing new concepts under mathematically controllable
conditions. In more recent times, low-dimensional models (e.g. chiral models,
factorizing models) have often been treated by special recipes in a way which
sometimes led to a loss of the unity of QFT. In the present work I try to
counteract this separatist tendency by reviewing past results within the setting
of the general principles of QFT. To this I add two new ideas: (1) a modular
interpretation of the chiral model Diff(S)-covariance with a close connection
to the recently formulated local covariance principle for QFT in curved
spacetime and (2) a derivation of the chiral model temperature duality from a
suitable operator formulation of the angular Wick rotation (in analogy to the
Nelson-Symanzik duality in the Osterwalder-Schrader setting) for rational
chiral theories. The SL(2,Z) modular Verlinde relation is a special case of
this thermal duality and (within the family of rational models) the matrix S
appearing in the thermal duality relation becomes identified with the
statistics character matrix S. The relevant ``angular Euclideanization'' is done
in the setting of the Tomita-Takesaki modular formalism of operator algebras.
I find it appropriate to dedicate this work to the memory of J. A. Swieca
with whom I shared the interest in two-dimensional models as a testing ground
for QFT for more than one decade.
This is a significantly extended version of an ``Encyclopedia of Mathematical
Physics'' contribution, hep-th/0502125.
Comment: 55 pages, removal of some typos in section
The Definition of Mach's Principle
Two definitions of Mach's principle are proposed. Both are related to gauge
theory, are universal in scope and amount to formulations of causality that
take into account the relational nature of position, time, and size. One of
them leads directly to general relativity and may have relevance to the problem
of creating a quantum theory of gravity.
Comment: To be published in Foundations of Physics as an invited contribution to Peter Mittelstaedt's 80th Birthday Festschrift. 30 pages
Impact of topology in foliated Quantum Einstein Gravity
We use a functional renormalization group equation tailored to the
Arnowitt-Deser-Misner formulation of gravity to study the scale-dependence of
Newton's coupling and the cosmological constant on a background spacetime with
topology S^1xS^d. The resulting beta functions possess a non-trivial
renormalization group fixed point, which may provide the high-energy completion
of the theory through the asymptotic safety mechanism. The fixed point is
robust with respect to changing the parametrization of the metric fluctuations
and regulator scheme. The phase diagrams show that this fixed point is
connected to a classical regime through a crossover. In addition, the flow may
exhibit a regime of "gravitational instability", modifying the theory in the
deep infrared. Our work complements earlier studies of the gravitational
renormalization group flow on a background topology S^1xT^d and establishes
that the flow is essentially independent of the background topology.
Comment: 33 pages, 14 figures
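The fixed-point structure underlying asymptotic safety can be illustrated with a toy one-coupling beta function (this is not one of the paper's ADM beta functions, which couple Newton's constant and the cosmological constant): a canonical-dimension term 2g competing with a quantum correction -g^2 produces a non-trivial zero, which Newton iteration locates:

```python
def beta(g):
    # Toy beta function: canonical scaling term (2g) minus a quantum
    # correction (g^2). Illustrative only, not the paper's result.
    return 2.0 * g - g ** 2

def dbeta(g):
    return 2.0 - 2.0 * g

# Newton iteration for the non-Gaussian fixed point beta(g*) = 0, g* != 0.
g = 1.5
for _ in range(50):
    g -= beta(g) / dbeta(g)

# Critical exponent: theta > 0 means the direction is UV-attractive,
# which is what makes the fixed point a candidate high-energy completion.
theta = -dbeta(g)
```

Here the iteration converges to g* = 2 with theta = 2, the toy analogue of the "robust non-trivial renormalization group fixed point" found in the abstract.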
Deep Reflectance Maps
Undoing the image formation process, and thereby decomposing appearance into
its intrinsic properties, is a challenging task due to the under-constrained
nature of this inverse problem. While significant progress has been made on
inferring shape, materials and illumination from images alone, progress in an
unconstrained setting is still limited. We propose a convolutional neural
architecture to estimate reflectance maps of specular materials in natural
lighting conditions. We achieve this in an end-to-end learning formulation that
directly predicts a reflectance map from the image itself. We show how to
improve estimates by incorporating additional supervision in an indirect scheme
that first predicts surface orientation and afterwards predicts the reflectance
map by a learning-based sparse data interpolation.
In order to analyze performance on this difficult task, we propose a new
challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg)
using both synthetic and real images. Furthermore, we show the application of
our method to a range of image-based editing tasks on real images.
Comment: project page: http://homes.esat.kuleuven.be/~krematas/DRM
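The indirect scheme above, which first predicts surface orientations and then fills in a full reflectance map from sparse (normal, color) observations, can be sketched with a simple inverse-distance interpolation. This hand-rolled interpolator is a stand-in for the paper's learned sparse data interpolation, and all names are illustrative:

```python
import numpy as np

def interpolate_reflectance_map(normals, colors, grid_size=32, eps=1e-6):
    """Fill a (grid_size, grid_size, 3) reflectance map over the visible
    normal hemisphere from sparse (normal -> color) samples, using
    inverse-distance weighting on the normals' (x, y) coordinates."""
    ys, xs = np.mgrid[-1:1:grid_size * 1j, -1:1:grid_size * 1j]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=-1)        # (G, 2)
    pts = np.asarray(normals, float)[:, :2]                   # (N, 2)
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # (G, N)
    w = 1.0 / (d2 + eps)
    w /= w.sum(axis=1, keepdims=True)       # convex weights per grid cell
    out = w @ np.asarray(colors, float)     # (G, 3)
    return out.reshape(grid_size, grid_size, 3)

# Three sparse observations: predicted normal direction -> observed RGB.
normals = [(0.0, 0.0, 1.0), (0.7, 0.0, 0.71), (0.0, 0.7, 0.71)]
colors  = [(1.0, 0.2, 0.2), (0.2, 1.0, 0.2), (0.2, 0.2, 1.0)]
rmap = interpolate_reflectance_map(normals, colors)
```

Since each output pixel is a convex combination of the observed colors, the interpolated map stays within the range of the sparse samples.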