Photometric Depth Super-Resolution
This study explores the use of photometric techniques (shape-from-shading and
uncalibrated photometric stereo) for upsampling the low-resolution depth map
from an RGB-D sensor to the higher resolution of the companion RGB image. A
single-shot variational approach is first put forward, which is effective as
long as the target's reflectance is piecewise-constant. It is then shown that
this dependency upon a specific reflectance model can be relaxed by focusing on
a specific class of objects (e.g., faces) and delegating reflectance estimation
to a deep neural network. Finally, a multi-shot strategy based on randomly
varying lighting conditions is discussed. It requires no training or prior
on the reflectance, yet this comes at the price of a dedicated acquisition
setup. Both quantitative and qualitative evaluations illustrate the
effectiveness of the proposed methods on synthetic and real-world scenarios.
Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence
(T-PAMI), 2019. First three authors contributed equally.
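To make the single-shot idea concrete, here is a minimal sketch of the kind of variational energy such an approach minimises: a data term tying the upsampled depth to the low-resolution measurement, plus a Lambertian shading term tying the rendered image to the RGB observation. The known directional light and the single scalar albedo (standing in for piecewise-constant reflectance) are illustrative simplifications, not the paper's exact formulation, and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def normals_from_depth(z):
    # Unit surface normals from forward-difference depth gradients
    # (orthographic camera assumed for simplicity).
    zx = F.pad(z[:, 1:] - z[:, :-1], (0, 1))        # dz/dx, padded to (H, W)
    zy = F.pad(z[1:, :] - z[:-1, :], (0, 0, 0, 1))  # dz/dy, padded to (H, W)
    n = torch.stack((-zx, -zy, torch.ones_like(z)))
    return n / n.norm(dim=0, keepdim=True)

def energy(z_hr, z_lr, img, light, rho, lam=0.1):
    # Data term: the downsampled refined depth must match the sensor depth.
    scale = z_hr.shape[0] // z_lr.shape[0]
    data = (F.avg_pool2d(z_hr[None, None], scale)[0, 0] - z_lr).pow(2).mean()
    # Photometric term: Lambertian shading of the refined depth vs. the image.
    shading = (normals_from_depth(z_hr) * light[:, None, None]).sum(0).clamp(min=0)
    return data + lam * (rho * shading - img).pow(2).mean()

# Toy data: 32x32 sensor depth upsampled to the 128x128 image resolution.
z_lr, img = torch.rand(32, 32), torch.rand(128, 128)
light = torch.tensor([0.2, 0.3, 0.93])
z_hr = F.interpolate(z_lr[None, None], size=(128, 128), mode='bilinear',
                     align_corners=False)[0, 0].clone().requires_grad_()
opt = torch.optim.Adam([z_hr], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    energy(z_hr, z_lr, img, light, rho=0.8).backward()
    opt.step()
```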
DANI-Net: Uncalibrated Photometric Stereo by Differentiable Shadow Handling, Anisotropic Reflectance Modeling, and Neural Inverse Rendering
Uncalibrated photometric stereo (UPS) is challenging due to the inherent
ambiguity brought by the unknown light. Although the ambiguity is alleviated on
non-Lambertian objects, the problem remains difficult for more general objects
whose complex shapes introduce irregular shadows, and for general materials
with complex reflectance such as anisotropy. To exploit
cues from shadow and reflectance to solve UPS and improve performance on
general materials, we propose DANI-Net, an inverse rendering framework with
differentiable shadow handling and anisotropic reflectance modeling. Unlike
most previous methods that use non-differentiable shadow maps and assume
isotropic material, our network benefits from cues of shadow and anisotropic
reflectance through two differentiable paths. Experiments on multiple
real-world datasets demonstrate our superior and robust performance.
Comment: Accepted by CVPR 202
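As a concrete illustration of two differentiable paths, the sketch below pairs a classic anisotropic reflectance model (the Ward BRDF) with a sigmoid-based soft shadow term; both are differentiable end to end, so shadow and reflectance cues can propagate gradients during inverse rendering. This is a generic stand-in, not DANI-Net's actual shadow model or BRDF parameterisation, and all names are illustrative.

```python
import torch

def ward_anisotropic(n, l, v, t, b, rho_d, rho_s, ax, ay):
    # Anisotropic Ward BRDF: t, b are the tangent/bitangent spanning the
    # anisotropy frame; ax, ay are the roughnesses along each direction.
    h = torch.nn.functional.normalize(l + v, dim=-1)
    nh = (n * h).sum(-1).clamp(min=1e-4)
    nl = (n * l).sum(-1).clamp(min=1e-4)
    nv = (n * v).sum(-1).clamp(min=1e-4)
    ex = ((h * t).sum(-1) / ax) ** 2
    ey = ((h * b).sum(-1) / ay) ** 2
    spec = torch.exp(-(ex + ey) / nh ** 2) / (
        4 * torch.pi * ax * ay * torch.sqrt(nl * nv))
    return rho_d / torch.pi + rho_s * spec

def soft_shadow(n, l, k=50.0):
    # Differentiable stand-in for a binary shadow map: a steep sigmoid on
    # n.l keeps gradients alive where a hard threshold would kill them.
    return torch.sigmoid(k * (n * l).sum(-1))

# Shaded intensity with both differentiable paths active (nl as above):
# intensity = soft_shadow(n, l) * ward_anisotropic(n, l, v, t, b, ...) * nl
```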
OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects
We introduce OpenIllumination, a real-world dataset containing over 108K
images of 64 objects with diverse materials, captured from 72 camera views
under a large number of different illuminations. For each image in the dataset, we
provide accurate camera parameters, illumination ground truth, and foreground
segmentation masks. Our dataset enables the quantitative evaluation of most
inverse rendering and material decomposition methods for real objects. We
examine several state-of-the-art inverse rendering methods on our dataset and
compare their performances. The dataset and code can be found on the project
page: https://oppo-us-research.github.io/OpenIllumination
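As a sketch of the kind of quantitative evaluation the segmentation masks enable, one can score a method's re-rendered images against ground truth on foreground pixels only. This is a hypothetical snippet, not the project's released tooling.

```python
import numpy as np

def masked_psnr(pred, gt, mask):
    # PSNR restricted to foreground pixels, using a segmentation mask to
    # exclude the background; images assumed to be in [0, 1].
    err = ((pred - gt) ** 2)[mask > 0].mean()
    return 10 * np.log10(1.0 / err)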
A Neural Height-Map Approach for the Binocular Photometric Stereo Problem
In this work we propose a novel, highly practical binocular photometric
stereo (PS) framework, which has the same acquisition speed as single-view PS,
yet significantly improves the quality of the estimated geometry.
As in recent neural multi-view shape estimation frameworks such as NeRF and
SIREN, and inverse-graphics approaches to multi-view photometric stereo (e.g.,
PS-NeRF), we formulate the shape estimation task as learning a differentiable
surface and texture representation, minimising both the discrepancy with
surface normals estimated from multiple varying-light images in two views and
the discrepancy between rendered surface intensity and the observed images. Our method
differs from typical multi-view shape estimation approaches in two key ways.
First, our surface is represented not as a volume but as a neural heightmap
where heights of points on a surface are computed by a deep neural network.
Second, instead of predicting an average intensity as PS-NeRF does, or
introducing Lambertian material assumptions as Guo et al. do, we use a learnt
BRDF and perform near-field, per-point intensity rendering.
Our method achieves state-of-the-art performance on the DiLiGenT-MV dataset
adapted to the binocular stereo setup, as well as on a new binocular
photometric stereo dataset, LUCES-ST.
Comment: WACV 202
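A minimal sketch of the neural height-map idea follows: an MLP maps pixel coordinates to heights, and surface normals come analytically from the network's gradient via autograd, ready to be compared against photometric-stereo normals in a loss. The architecture, activation, and sampling are illustrative stand-ins; the paper uses a SIREN-style network and additionally performs near-field per-point rendering with a learnt BRDF.

```python
import torch

class HeightMLP(torch.nn.Module):
    # Illustrative coordinate MLP; the paper uses a SIREN-style network.
    def __init__(self, hidden=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.SiLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.SiLU(),
            torch.nn.Linear(hidden, 1))

    def forward(self, xy):                  # xy: (N, 2) pixel coordinates
        return self.net(xy).squeeze(-1)     # (N,) heights

def surface_normals(model, xy):
    # Analytic normals from the height field via autograd:
    # n is proportional to (-dz/dx, -dz/dy, 1), then normalised.
    xy = xy.clone().requires_grad_(True)
    z = model(xy)
    (grad,) = torch.autograd.grad(z.sum(), xy, create_graph=True)
    n = torch.cat((-grad, torch.ones_like(z)[:, None]), dim=-1)
    return n / n.norm(dim=-1, keepdim=True)

model = HeightMLP()
xy = torch.rand(1024, 2)                    # random surface samples
n_pred = surface_normals(model, xy)         # supervise against PS normals
```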
Towards Scalable Multi-View Reconstruction of Geometry and Materials
In this paper, we propose a novel method for joint recovery of camera pose,
object geometry and spatially-varying Bidirectional Reflectance Distribution
Function (svBRDF) of 3D scenes that exceed object-scale and hence cannot be
captured with stationary light stages. The inputs are high-resolution RGB-D
images captured by a mobile, hand-held capture system with point lights for
active illumination. Compared to previous works that jointly estimate geometry
and materials from a hand-held scanner, we formulate this problem using a
single objective function that can be minimized using off-the-shelf
gradient-based solvers. To facilitate scalability to large numbers of
observation views and optimization variables, we introduce a distributed
optimization algorithm that reconstructs 2.5D keyframe-based representations of
the scene. A novel multi-view consistency regularizer effectively synchronizes
neighboring keyframes such that the local optimization results allow for
seamless integration into a globally consistent 3D model. We provide a study on
the importance of each component in our formulation and show that our method
compares favorably to baselines. We further demonstrate that our method
accurately reconstructs various objects and materials and allows for expansion
to spatially larger scenes. We believe that this work represents a significant
step towards making geometry and material estimation from hand-held scanners
scalable.
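To illustrate the flavour of a multi-view consistency regulariser that synchronises neighbouring keyframes, the sketch below reprojects one keyframe's depth into another and penalises depth disagreement. The pinhole model, names, poses, and nearest-neighbour lookup are illustrative simplifications, not the paper's exact formulation.

```python
import torch

def consistency(depth_i, depth_j, K, T_ji):
    # depth_i, depth_j: (H, W) keyframe depths; K: 3x3 intrinsics;
    # T_ji: 4x4 rigid transform taking points from frame i to frame j.
    H, W = depth_i.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    pix = torch.stack((u, v, torch.ones_like(u))).float().reshape(3, -1)
    pts_i = torch.linalg.inv(K) @ pix * depth_i.reshape(1, -1)  # backproject
    pts_j = T_ji[:3, :3] @ pts_i + T_ji[:3, 3:]                 # into frame j
    proj = K @ pts_j
    uv = (proj[:2] / proj[2:].clamp(min=1e-6)).round().long()
    ok = ((uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H)
          & (pts_j[2] > 0))
    # Nearest-neighbour depth lookup in keyframe j (kept simple here);
    # penalise disagreement between reprojected and observed depth.
    sampled = depth_j[uv[1, ok], uv[0, ok]]
    return (sampled - pts_j[2, ok]).abs().mean()
```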