2,819 research outputs found
SplitNeRF: Split Sum Approximation Neural Field for Joint Geometry, Illumination, and Material Estimation
We present a novel approach for digitizing real-world objects by estimating
their geometry, material properties, and environmental lighting from a set of
posed images with fixed lighting. Our method incorporates into Neural Radiance
Field (NeRF) pipelines the split sum approximation used with image-based
lighting for real-time physically based rendering. We propose modeling the
scene's lighting with a single scene-specific MLP representing pre-integrated
image-based lighting at arbitrary resolutions. We achieve accurate modeling of
pre-integrated lighting by exploiting a novel regularizer based on efficient
Monte Carlo sampling. Additionally, we propose a new method of supervising
self-occlusion predictions by exploiting a similar regularizer based on Monte
Carlo sampling. Experimental results demonstrate the efficiency and
effectiveness of our approach in estimating scene geometry, material
properties, and lighting. Our method attains state-of-the-art relighting
quality after about an hour of training on a single NVIDIA A100 GPU.
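As a rough illustration of the idea behind the split-sum approximation (not the paper's implementation), the first split-sum factor — the pre-filtered environment radiance — can be estimated by Monte Carlo averaging of radiance over directions clustered around the reflection vector. The environment model, the Gaussian direction spread, and the roughness-to-spread mapping below are all toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def env_radiance(d):
    # Toy environment light: a bright "sun" lobe toward +z plus uniform sky.
    sun = np.array([0.0, 0.0, 1.0])
    return 0.2 + 3.0 * np.maximum(d @ sun, 0.0) ** 32

def prefiltered_light(r, roughness, n_samples=4096):
    # Monte Carlo estimate of the first split-sum factor: average
    # environment radiance over directions spread around the
    # reflection vector r, with the spread controlled by roughness.
    dirs = r + roughness * rng.normal(size=(n_samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return env_radiance(dirs).mean()

# Low roughness keeps samples near the sun direction; high roughness
# blurs the environment and lowers the peak response.
print(prefiltered_light(np.array([0.0, 0.0, 1.0]), 0.05))
print(prefiltered_light(np.array([0.0, 0.0, 1.0]), 1.0))
```

In a real pipeline this pre-integration is tabulated per roughness level (e.g., in mip levels of a cube map); here a per-query estimate keeps the sketch short.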
Hierarchical Variance Reduction Techniques for Monte Carlo Rendering
Ever since the first three-dimensional computer graphics appeared half a century ago, the goal has been to model and simulate how light interacts with materials and objects to form an image. The ultimate goal is photorealistic rendering, where the created images reach a level of accuracy that makes them indistinguishable from photographs of the real world. There are many applications – visualization of products and architectural designs yet to be built, special effects, computer-generated films, virtual reality, and video games, to name a few. However, the problem has proven tremendously complex; the illumination at any point is described by a recursive integral to which a closed-form solution seldom exists. Instead, computer simulation and Monte Carlo methods are commonly used to statistically estimate the result. This introduces undesirable noise, or variance, and a large body of research has been devoted to finding ways to reduce the variance. I continue along this line of research, and present several novel techniques for variance reduction in Monte Carlo rendering, as well as a few related tools. The research in this dissertation focuses on using importance sampling to pick a small set of well-distributed point samples. As the primary contribution, I have developed the first methods to explicitly draw samples from the product of distant high-frequency lighting and complex reflectance functions. By sampling the product, low noise results can be achieved using a very small number of samples, which is important to minimize the rendering times. Several different hierarchical representations are explored to allow efficient product sampling. In the first publication, the key idea is to work in a compressed wavelet basis, which allows fast evaluation of the product. Many of the initial restrictions of this technique were removed in follow-up work, allowing higher-resolution uncompressed lighting and avoiding precomputation of reflectance functions.
My second main contribution is to present one of the first techniques to take the triple product of lighting, visibility and reflectance into account to further reduce the variance in Monte Carlo rendering. For this purpose, control variates are combined with importance sampling to solve the problem in a novel way. A large part of the technique also focuses on analysis and approximation of the visibility function. To further refine the above techniques, several useful tools are introduced. These include a fast, low-distortion map to represent (hemi)spherical functions, a method to create high-quality quasi-random points, and an optimizing compiler for analyzing shaders using interval arithmetic. The latter automatically extracts bounds for importance sampling of arbitrary shaders, as opposed to using a priori known reflectance functions. In summary, the work presented here takes the field of computer graphics one step further towards making photorealistic rendering practical for a wide range of uses. By introducing several novel Monte Carlo methods, more sophisticated lighting and materials can be used without increasing the computation times. The research is aimed at domain-specific solutions to the rendering problem, but I believe that much of the new theory is applicable in other parts of computer graphics, as well as in other fields.
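The variance-reduction principle underlying this line of work can be shown in one dimension. Below is a minimal sketch (a toy integrand standing in for high-frequency lighting, not any of the dissertation's actual methods): uniform sampling wastes most samples where the integrand is near zero, while a sampling density matched to the integrand's shape drives the variance toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy integrand on [0, 1]: a sharp peak standing in for
# high-frequency illumination concentrated in one direction.
f = lambda x: np.exp(-200.0 * (x - 0.5) ** 2)

n = 10_000

# Uniform sampling: most samples land where f is nearly zero.
xu = rng.uniform(0.0, 1.0, n)
est_uniform = f(xu).mean()

# Importance sampling from a Gaussian matched to the peak (the
# pdf is taken as the full Gaussian density, adequate here since
# the tails outside [0, 1] are negligible).
sigma = 1.0 / np.sqrt(400.0)
xi = rng.normal(0.5, sigma, n)
pdf = np.exp(-(xi - 0.5) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
est_importance = np.mean(f(xi) / pdf)

print(est_uniform, est_importance)  # both near sqrt(pi/200) ~ 0.1253
```

Because the pdf here is exactly proportional to the integrand, every weight f(x)/p(x) is the same constant and the importance-sampled estimator has essentially zero variance; product sampling of lighting and reflectance aims at the same effect in rendering.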
Extensive light profile fitting of galaxy-scale strong lenses
We investigate the merits of a massive forward modeling of ground-based
optical imaging as a diagnostic for the strong-lensing nature of Early-Type
Galaxies, in whose light blurred and faint Einstein rings can hide. We
simulate several thousand mock strong lenses under ground- and space-based
conditions as arising from the deflection of an exponential disk by a
foreground de Vaucouleurs light profile whose lensing potential is described by
a Singular Isothermal Ellipsoid. We then fit for the lensed light distribution
with sl_fit, both after subtracting the foreground light emission (the ideal
case) and after fitting the deflector's light with galfit. By
setting thresholds in the output parameter space, we can decide the
lens/not-a-lens status of each system. We finally apply our strategy to a
sample of 517 lens candidates present in the CFHTLS data to test the
consistency of our selection approach. The efficiency of the fast modeling
method at recovering the main lens parameters like Einstein radius, total
magnification or total lensed flux, is quite comparable under CFHT and HST
conditions when the deflector is perfectly subtracted off (only possible in
simulations), fostering a sharp distinction between the good and the bad
candidates. Conversely, for a more realistic subtraction, a substantial
fraction of the lensed light is absorbed into the deflector's model, which
biases the subsequent fitting of the rings and then disturbs the selection
process. We quantify completeness and purity of the lens finding method in both
cases. This suggests that the main limitation currently resides in the
subtraction of the foreground light. Provided further enhancement of the
latter, the direct forward modeling of large numbers of galaxy-galaxy strong
lenses thus appears tractable and could constitute a competitive lens finder in
the next generation of wide-field imaging surveys.
Comment: A&A accepted version, minor changes (13 pages, 10 figures)
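For intuition about the lens model fitted above, here is a minimal sketch of the lens equation for a Singular Isothermal Sphere — the circular limit of the Singular Isothermal Ellipsoid used in the simulations (the 1-D treatment and the example numbers are illustrative assumptions, not the paper's pipeline):

```python
def sis_images(beta, theta_e):
    """Image positions for a 1-D Singular Isothermal Sphere lens.

    Lens equation: beta = theta - theta_e * sign(theta), where beta is
    the source position and theta_e the Einstein radius (same angular
    units). A source inside the Einstein radius yields two images
    straddling the lens; outside, only one image survives.
    """
    if abs(beta) < theta_e:
        return (beta + theta_e, beta - theta_e)
    return (beta + theta_e,)

print(sis_images(0.3, 1.0))  # two images, near +1.3 and -0.7
```

The Einstein radius recovered by such fits is one of the output parameters the paper thresholds on to separate lenses from non-lenses.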
Effects of the halo concentration distribution on strong-lensing optical depth and X-ray emission
We use simulated merger trees of galaxy-cluster halos to study the effect of
the halo concentration distribution on strong lensing and X-ray emission. Its
log-normal shape typically found in simulations favors outliers with high
concentration. Since, at fixed mass, more concentrated halos tend to be more
efficient lenses, the scatter in the concentration increases the strong-lensing
optical depth. Within cluster samples, mass and concentration
have counteracting effects on strong lensing and X-ray emission because the
concentration decreases for increasing mass. Selecting clusters by
concentration thus has no effect on the lensing cross section. The most
efficiently lensing and hottest clusters are typically the \textit{least}
concentrated in samples with a broad mass range. Among cluster samples with a
narrow mass range, however, the most strongly lensing and X-ray brightest
clusters are typically 10% to 25% more concentrated.
Comment: 12 pages, 10 figures. Version accepted by A&A
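The boost from concentration scatter can be illustrated with a toy calculation (an assumed log-normal distribution and an assumed power-law cross section, not the paper's simulated merger trees): when the lensing efficiency grows faster than linearly with concentration, log-normal scatter at fixed mass raises the sample-averaged cross section above that of the median halo.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed log-normal concentration scatter at fixed mass.
c_median, sigma_log = 5.0, 0.25
c = c_median * np.exp(sigma_log * rng.normal(size=100_000))

# Toy convex dependence of the strong-lensing cross section on
# concentration; the exact power is illustrative only.
cross_section = lambda c: c ** 3

boost = cross_section(c).mean() / cross_section(c_median)
print(boost)  # > 1: high-concentration outliers dominate the mean
```

This is just Jensen's inequality at work: for a log-normal, the mean of c^3 exceeds the cube of the median by the factor exp(9 sigma_log^2 / 2).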
Statistical Searches for Microlensing Events in Large, Non-Uniformly Sampled Time-Domain Surveys: A Test Using Palomar Transient Factory Data
Many photometric time-domain surveys are driven by specific goals, such as
searches for supernovae or transiting exoplanets, which set the cadence with
which fields are re-imaged. In the case of the Palomar Transient Factory (PTF),
several sub-surveys are conducted in parallel, leading to non-uniform sampling
over its footprint. While the median PTF field has been imaged 40 times in \textit{R}-band,
some fields have been observed 100 times. We use PTF data to
study the trade-off between searching for microlensing events in a survey whose
footprint is much larger than that of typical microlensing searches, but with
far-from-optimal time sampling. To examine the probability that microlensing
events can be recovered in these data, we test statistics used on uniformly
sampled data to identify variables and transients. We find that the von Neumann
ratio performs best for identifying simulated microlensing events in our data.
We develop a selection method using this statistic and apply it to light
curves from fields with at least 10 \textit{R}-band observations,
uncovering three candidate microlensing events. We lack simultaneous,
multi-color photometry to confirm these as microlensing events. However, their
number is consistent with predictions for the event rate in the PTF footprint
over the survey's three years of operations, as estimated from near-field
microlensing models. This work can help constrain all-sky event rate
predictions and tests microlensing signal recovery in large data sets, which
will be useful to future time-domain surveys, such as that planned with the
Large Synoptic Survey Telescope.
Comment: 13 pages, 14 figures; accepted for publication in ApJ; fixed author list
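The von Neumann ratio favored above is simple to compute: the mean squared successive difference of a light curve divided by its variance. A smooth, correlated excursion such as a microlensing bump drives the ratio well below the white-noise value of 2. A minimal sketch with a toy light curve (the bump shape and noise level are illustrative assumptions):

```python
import numpy as np

def von_neumann_ratio(mags):
    # Ratio of the mean squared successive difference to the variance.
    # Uncorrelated noise gives ~2; smooth correlated variability
    # (e.g. a microlensing brightening) gives a much lower value.
    mags = np.asarray(mags, dtype=float)
    return np.mean(np.diff(mags) ** 2) / np.var(mags)

rng = np.random.default_rng(3)
t = np.linspace(-3.0, 3.0, 200)
noise = rng.normal(0.0, 0.05, t.size)
bump = 0.5 * np.exp(-t**2)               # toy symmetric brightening

print(von_neumann_ratio(noise))          # near 2 for pure noise
print(von_neumann_ratio(bump + noise))   # well below 2
```

In practice a threshold on this statistic (calibrated on simulated events injected into real cadences, as the paper does) separates candidate events from flat or purely noisy light curves.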
Gravitational Lensing by Spinning Black Holes in Astrophysics, and in the Movie Interstellar
Interstellar is the first Hollywood movie to attempt depicting a black hole
as it would actually be seen by somebody nearby. For this we developed a code
called DNGR (Double Negative Gravitational Renderer) to solve the equations for
ray-bundle (light-beam) propagation through the curved spacetime of a spinning
(Kerr) black hole, and to render IMAX-quality, rapidly changing images. Our
ray-bundle techniques were crucial for achieving IMAX-quality smoothness
without flickering.
This paper has four purposes: (i) To describe DNGR for physicists and CGI
practitioners. (ii) To present the equations we use, when the camera is in
arbitrary motion at an arbitrary location near a Kerr black hole, for mapping
light sources to camera images via elliptical ray bundles. (iii) To describe
new insights, from DNGR, into gravitational lensing when the camera is near the
spinning black hole, rather than far away as in almost all prior studies. (iv)
To describe how the images of the black hole Gargantua and its accretion disk,
in the movie \emph{Interstellar}, were generated with DNGR. There are no new
astrophysical insights in this accretion-disk section of the paper, but disk
novices may find it pedagogically interesting, and movie buffs may find its
discussions of Interstellar interesting.
Comment: 46 pages, 17 figures
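DNGR itself propagates ray bundles through the Kerr metric; as a far simpler taste of lensing by a black hole, one can integrate the textbook photon orbit equation for a non-spinning (Schwarzschild) hole, d²u/dφ² = 3Mu² − u with u = 1/r and G = c = 1, and read off the bending angle (the step size and integrator here are illustrative choices, not DNGR's):

```python
import math

def deflection(b, M=1.0, dphi=1e-4):
    # Semi-implicit Euler integration of d^2u/dphi^2 = 3*M*u^2 - u,
    # starting from a ray at infinity with impact parameter b
    # (u = 0, du/dphi = 1/b) and stopping when the ray escapes again.
    u, du = 0.0, 1.0 / b
    phi = 0.0
    while u >= 0.0:
        du += (3.0 * M * u * u - u) * dphi
        u += du * dphi
        phi += dphi
    return phi - math.pi  # angle swept beyond a straight line

# For large impact parameter this approaches the weak-field
# Einstein result, 4M/b.
print(deflection(50.0), 4.0 / 50.0)
```

Rays with b below the critical value ~3√3 M are captured rather than deflected (this sketch would loop forever there); it is exactly that strong-field region, plus frame dragging from spin, that produces the distorted shadow and lensed disk images described in the paper.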