Single-shot layered reflectance separation using a polarized light field camera
We present a novel computational photography technique for single-shot separation of diffuse/specular reflectance as well as novel angular-domain separation of layered reflectance. Our solution consists of a two-way polarized light field (TPLF) camera which simultaneously captures two orthogonal states of polarization. A single photograph of a subject acquired with the TPLF camera under polarized illumination then enables standard separation of diffuse (depolarizing) and polarization-preserving specular reflectance using light field sampling. We further demonstrate that the acquired data also enables novel angular separation of layered reflectance, including separation of specular reflectance and single scattering in the polarization-preserving component, and separation of shallow scattering from deep scattering in the depolarizing component. We apply our approach to efficient acquisition of facial reflectance, including diffuse and specular normal maps, and novel separation of photometric normals into layered reflectance normals for layered facial renderings. We demonstrate our proposed single-shot layered reflectance separation to be comparable to an existing multi-shot technique that relies on structured lighting, while achieving separation results under a variety of illumination conditions.
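The "standard separation" the abstract refers to can be sketched with the classic polarization-difference rule: under polarized illumination, a cross-polarized view blocks the polarization-preserving specular light, so it carries half of the depolarized diffuse signal. A minimal sketch (this illustrates the generic polarization-difference technique only, not the paper's light-field sampling; function and variable names are illustrative):

```python
import numpy as np

def separate_reflectance(i_parallel, i_cross):
    """Polarization-difference imaging (generic technique, not the
    TPLF pipeline itself): i_parallel and i_cross are co- and
    cross-polarized intensity images of the same scene.
    The cross-polarized image contains half the depolarized diffuse
    signal; the difference isolates polarization-preserving specular."""
    diffuse = 2.0 * np.asarray(i_cross, dtype=float)
    specular = np.asarray(i_parallel, dtype=float) - np.asarray(i_cross, dtype=float)
    return diffuse, specular

# Toy example: one pixel observed at 0.8 (parallel) and 0.3 (cross).
d, s = separate_reflectance(np.array([0.8]), np.array([0.3]))
```

With those toy values the diffuse estimate is 0.6 and the specular estimate is 0.5; in the paper, the two polarization states come from a single exposure of the TPLF camera rather than two photographs.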
Reflectance Adaptive Filtering Improves Intrinsic Image Estimation
Separating an image into reflectance and shading layers poses a challenge for learning approaches because no large corpus of precise and realistic ground-truth decompositions exists. The Intrinsic Images in the Wild (IIW) dataset provides a sparse set of relative human reflectance judgments, which serves as a standard benchmark for intrinsic images. A number of methods use IIW to learn statistical dependencies between the images and their reflectance layer. Although learning plays an important role for high performance, we show that a standard signal processing technique achieves performance on par with the current state of the art. We propose a loss function for CNN learning of dense reflectance predictions. Our results show that a simple pixel-wise decision, without any context or prior knowledge, is sufficient to provide a strong baseline on IIW. This sets a competitive baseline which only two other approaches surpass. We then develop a joint bilateral filtering method that implements strong prior knowledge about reflectance constancy. This filtering operation can be applied to any intrinsic image algorithm, and we improve several previous results, achieving a new state of the art on IIW. Our findings suggest that the effect of learning-based approaches may have been overestimated so far; explicit prior knowledge is still at least as important for obtaining high performance in intrinsic image decompositions.
Comment: CVPR 201
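The reflectance-constancy prior the abstract describes amounts to smoothing a dense reflectance prediction while respecting edges in a guide image. A brute-force sketch of a generic joint (cross) bilateral filter, assuming grayscale arrays; the parameter names and window size are illustrative, not the paper's:

```python
import numpy as np

def joint_bilateral_filter(reflectance, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Generic joint bilateral filter (illustrative, not the paper's exact
    formulation): each output pixel is a weighted average of nearby
    reflectance values, where weights fall off with spatial distance
    (sigma_s) and with dissimilarity in the guide image (sigma_r).
    Regions that look alike in the guide are pushed toward constant
    reflectance; guide edges are preserved."""
    h, w = reflectance.shape
    out = np.zeros_like(reflectance, dtype=float)
    for y in range(h):
        for x in range(w):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        wr = np.exp(-((guide[yy, xx] - guide[y, x]) ** 2)
                                    / (2 * sigma_r ** 2))
                        acc += ws * wr * reflectance[yy, xx]
                        wsum += ws * wr
            out[y, x] = acc / wsum
    return out
```

Because the filter is independent of how the reflectance estimate was produced, it can be applied as a post-process to any intrinsic image algorithm's output, which is what makes the improvement in the abstract method-agnostic.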
Linear Efficient Antialiased Displacement and Reflectance Mapping
We present Linear Efficient Antialiased Displacement and Reflectance (LEADR) mapping, a reflectance filtering technique for displacement-mapped surfaces. Similarly to LEAN mapping, it employs two mipmapped texture maps, which store the first two moments of the displacement gradients. During rendering, the projection of this data over a pixel is used to compute a non-centered anisotropic Beckmann distribution using only simple, linear filtering operations. The distribution is then injected into a new, physically based, rough-surface microfacet BRDF model that includes masking and shadowing effects for both diffuse and specular reflection under directional, point, and environment lighting. Furthermore, our method is compatible with animation and deformation, making it extremely general and flexible. Combined with an adaptive meshing scheme, LEADR mapping provides the very first seamless and hardware-accelerated multi-resolution representation for surfaces. In order to demonstrate its effectiveness, we render highly detailed production models in real time on a commodity GPU, with quality matching supersampled ground-truth images.
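The key property exploited above is that moments average linearly, so mipmapping them is exact, while the slope distribution they encode is recovered per pixel. A minimal sketch of that moment bookkeeping (illustrative names; the actual LEADR shader works on GPU texture data):

```python
import numpy as np

def gradient_moments(gx, gy):
    """First two moments of the displacement gradients over a footprint.
    These five scalars are what a LEAN/LEADR-style scheme stores in
    mipmapped textures: unlike the slope distribution itself, they can
    be averaged (filtered) linearly without error."""
    gx, gy = np.asarray(gx, float), np.asarray(gy, float)
    return np.array([gx.mean(), gy.mean(),
                     (gx * gx).mean(), (gy * gy).mean(), (gx * gy).mean()])

def beckmann_params(m):
    """Recover the non-centered anisotropic Beckmann slope distribution:
    mean slope E[g], and covariance E[g g^T] - E[g] E[g]^T."""
    mx, my, mxx, myy, mxy = m
    mean = (mx, my)
    cov = (mxx - mx * mx, myy - my * my, mxy - mx * my)
    return mean, cov
```

For instance, a footprint whose x-slopes are 0 and 2 yields a mean slope of 1 with slope variance 1: the variance captures the sub-pixel roughness that would otherwise alias away when the displacement is filtered directly.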
Rendering Deformable Surface Reflectance Fields
Animation of photorealistic computer graphics models is an important goal for many applications. Image-based modeling has emerged as a promising approach to capture and visualize real-world objects. Animating image-based models, however, is still a largely unsolved problem. In this paper, we extend a popular image-based representation called the surface reflectance field to animate and render deformable real-world objects under arbitrary illumination. Deforming the surface reflectance field is achieved by modifying the underlying impostor geometry. We augment the impostor with a local parameterization that allows the correct evaluation of acquired reflectance images, preserving the original light model on the deformed surface. We present a deferred shading scheme to handle the increased amount of data involved in shading the deformable surface reflectance field. We show animations of various objects that were acquired with 3D photography.
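The deferred shading scheme mentioned above decouples visibility from the expensive reflectance lookup: a first pass rasterizes per-pixel attributes, and only visible pixels evaluate the acquired reflectance data. A heavily simplified sketch, assuming a precomputed G-buffer of local parameterization coordinates (all names are hypothetical; the paper's renderer operates on acquired reflectance images, not a closed-form lookup):

```python
import numpy as np

def shade_deferred(gbuffer, reflectance_lookup):
    """Toy deferred shading pass. gbuffer has shape (h, w, 2) and holds
    the (u, v) local-parameterization coordinates written by a prior
    rasterization pass; reflectance_lookup(u, v) stands in for evaluating
    the acquired reflectance data. Shading cost is thus one evaluation
    per visible pixel, regardless of scene complexity."""
    h, w, _ = gbuffer.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            u, v = gbuffer[y, x]
            out[y, x] = reflectance_lookup(u, v)
    return out
```

The point of the design is that the heavy reflectance data is touched exactly once per output pixel, which is what keeps the deformable representation tractable to shade.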