Computational Schlieren Photography with Light Field Probes
We introduce a new approach to capturing refraction in transparent media, which we call light field background oriented Schlieren photography. By optically coding the locations and directions of light rays emerging from a light field probe, we can capture changes of the refractive index field between the probe and a camera or an observer. Our prototype capture setup consists of inexpensive off-the-shelf hardware, including inkjet-printed transparencies, lenslet arrays, and a conventional camera. By carefully encoding the color and intensity variations of 4D light field probes, we show how to code both spatial and angular information of refractive phenomena. Such coding schemes are demonstrated to allow for a new, single-image approach to reconstructing transparent surfaces, such as thin solids or surfaces of fluids. The captured visual information is used to reconstruct refractive surface normals and a sparse set of control points independently from a single photograph.
Funding: Natural Sciences and Engineering Research Council of Canada; Alfred P. Sloan Foundation; United States Defense Advanced Research Projects Agency (Young Faculty Award)
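The abstract's core idea, decoding a ray's deflection from the color it picks up at the probe and inverting a single refraction event to get a surface normal, can be sketched as follows. The linear color code, the `THETA_MAX` range, and the function names are illustrative assumptions, not the paper's actual calibration:

```python
import numpy as np

# Hypothetical linear color code: the probe maps a ray's 2D angular
# deflection (theta_x, theta_y), in radians, onto the red/green channels.
THETA_MAX = 0.1  # assumed maximum encoded deflection, in radians


def decode_deflection(rgb):
    """Map an observed (r, g, b) pixel, each in [0, 1], back to angles."""
    r, g, _ = rgb
    theta_x = (2.0 * r - 1.0) * THETA_MAX
    theta_y = (2.0 * g - 1.0) * THETA_MAX
    return theta_x, theta_y


def normal_from_deflection(theta_x, theta_y, n1=1.0, n2=1.33):
    """Recover a refractive surface normal (up to sign) from a small
    deflection, assuming a single refraction event: by Snell's law in
    vector form, n1*d_in - n2*d_out is parallel to the surface normal."""
    d_in = np.array([0.0, 0.0, 1.0])                 # undeflected ray
    d_out = np.array([np.tan(theta_x), np.tan(theta_y), 1.0])
    d_out /= np.linalg.norm(d_out)
    n = n1 * d_in - n2 * d_out
    return n / np.linalg.norm(n)
```

A mid-gray observation (r = g = 0.5) decodes to zero deflection, i.e. the ray passed through undisturbed and the recovered normal is parallel to the viewing axis.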
DeepVoxels: Learning Persistent 3D Feature Embeddings
In this work, we address the lack of 3D understanding of generative neural
networks by introducing a persistent 3D feature embedding for view synthesis.
To this end, we propose DeepVoxels, a learned representation that encodes the
view-dependent appearance of a 3D scene without having to explicitly model its
geometry. At its core, our approach is based on a Cartesian 3D grid of
persistent embedded features that learn to make use of the underlying 3D scene
structure. Our approach combines insights from 3D geometric computer vision
with recent advances in learning image-to-image mappings based on adversarial
loss functions. DeepVoxels is supervised with a 2D re-rendering loss, without
requiring a 3D reconstruction of the scene, and enforces perspective and
multi-view geometry in a principled manner. We apply our persistent 3D scene
representation to the problem of novel view synthesis, demonstrating
high-quality results for a variety of challenging scenes.
Comment: Video: https://www.youtube.com/watch?v=HM_WsZhoGXw | Supplemental material: https://drive.google.com/file/d/1BnZRyNcVUty6-LxAstN83H79ktUq8Cjp/view?usp=sharing | Code: https://github.com/vsitzmann/deepvoxels | Project page: https://vsitzmann.github.io/deepvoxels
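The central representation described above, a persistent grid of feature vectors attached to 3D space that is projected into a 2D feature map for each view, can be illustrated with a minimal sketch. The grid resolution, channel count, and the orthographic mean-over-depth projection are simplifying assumptions; the paper uses perspective projection and learned occlusion reasoning:

```python
import numpy as np

# A persistent 3D feature embedding: one C-dimensional feature vector
# per voxel of a G x G x G Cartesian grid (here randomly initialized,
# as it would be before training).
G, C = 16, 8
voxel_features = np.random.randn(G, G, G, C) * 0.01


def project_to_view(features, axis=2):
    """Collapse the grid along one axis as a stand-in for ray marching:
    averaging over depth yields a (G, G, C) 2D feature map that a
    rendering network could then decode into an image."""
    return features.mean(axis=axis)


feature_map = project_to_view(voxel_features)
# The 2D feature map keeps the spatial grid resolution and channels.
assert feature_map.shape == (G, G, C)
```

Because the grid persists across views, the same `voxel_features` tensor is reused for every camera pose, which is what lets multi-view consistency emerge from purely 2D supervision.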
Hand-held Schlieren Photography with Light Field Probes
We introduce a new approach to capturing refraction in transparent media, which we call Light Field Background Oriented Schlieren Photography (LFBOS). By optically coding the locations and directions of light rays emerging from a light field probe, we can capture changes of the refractive index field between the probe and a camera or an observer. Rather than using complicated and expensive optical setups as in traditional Schlieren photography, we employ commodity hardware; our prototype consists of a camera and a lenslet array. By carefully encoding the color and intensity variations of a 4D probe instead of a diffuse 2D background, we avoid the expensive computational processing of the captured data that is necessary for Background Oriented Schlieren imaging (BOS). We analyze the benefits and limitations of our approach and discuss application scenarios.