A Morphable Face Albedo Model
In this paper, we bring together two divergent strands of research:
photometric face capture and statistical 3D face appearance modelling. We
propose a novel lightstage capture and processing pipeline for acquiring
ear-to-ear, truly intrinsic diffuse and specular albedo maps that fully factor
out the effects of illumination, camera and geometry. Using this pipeline, we
capture a dataset of 50 scans and combine them with the only existing publicly
available albedo dataset (3DRFE) of 23 scans. This allows us to build the first
morphable face albedo model. We believe this is the first statistical analysis
of the variability of facial specular albedo maps. This model can be used as a
plug-in replacement for the texture model of the Basel Face Model (BFM) or
FLAME and we make the model publicly available. We ensure careful spectral
calibration such that our model is built in a linear sRGB space, suitable for
inverse rendering of images taken by typical cameras. We demonstrate our model
in a state-of-the-art analysis-by-synthesis 3DMM fitting pipeline, are the
first to integrate specular map estimation, and outperform the BFM in albedo
reconstruction.
Comment: CVPR 2020
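A morphable albedo model of this kind is, at its core, a linear statistical model: a novel albedo map is the dataset mean plus a weighted combination of principal components. The sketch below illustrates that idea with PCA on toy data; all dimensions and names are hypothetical stand-ins, not the paper's actual model.

```python
import numpy as np

# Hypothetical sketch of a linear (PCA-based) albedo model.
# Each training scan is treated as one flattened per-vertex RGB albedo map.
rng = np.random.default_rng(0)
n_scans, n_dims = 73, 300          # 50 + 23 scans; toy dimensionality
scans = rng.random((n_scans, n_dims))

mean = scans.mean(axis=0)
# PCA via SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(scans - mean, full_matrices=False)
components = Vt                    # principal directions (one per row)
std = S / np.sqrt(n_scans - 1)     # per-component standard deviations

def sample_albedo(coeffs):
    """Synthesize an albedo map from model coefficients."""
    k = len(coeffs)
    return mean + (coeffs * std[:k]) @ components[:k]

new_albedo = sample_albedo(rng.standard_normal(10))
```

With zero coefficients the model reproduces the mean albedo; scaling coefficients by the per-component standard deviation keeps samples in a plausible range.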
Ear-to-ear Capture of Facial Intrinsics
We present a practical approach to capturing ear-to-ear face models
comprising both 3D meshes and intrinsic textures (i.e. diffuse and specular
albedo). Our approach is a hybrid of geometric and photometric methods and
requires no geometric calibration. Photometric measurements made in a
lightstage are used to estimate view dependent high resolution normal maps. We
overcome the problem of having a single photometric viewpoint by capturing in
multiple poses. We use uncalibrated multiview stereo to estimate a coarse base
mesh to which the photometric views are registered. We propose a novel
approach for robustly stitching surface normal and intrinsic texture data into
a seamless, complete and highly detailed face model. The resulting relightable
models provide photorealistic renderings from any viewpoint.
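The photometric measurements behind such a pipeline are commonly processed with Lambertian photometric stereo: given pixel intensities under several known light directions, per-pixel normal and diffuse albedo follow from a least-squares solve. The toy example below shows that core idea for one pixel; the light setup and values are illustrative assumptions, and the paper's lightstage processing is more sophisticated (e.g. separating specular reflectance).

```python
import numpy as np

# Minimal Lambertian photometric-stereo sketch for a single pixel.
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [-1.0, 0.0, 1.0]])
L /= np.linalg.norm(L, axis=1, keepdims=True)   # unit light directions

true_n = np.array([0.2, -0.1, 0.97])
true_n /= np.linalg.norm(true_n)                # ground-truth surface normal
albedo = 0.8                                    # ground-truth diffuse albedo
intensities = albedo * L @ true_n               # simulated measurements

# Least-squares recovery of g = albedo * n from I = L @ g.
g, *_ = np.linalg.lstsq(L, intensities, rcond=None)
est_albedo = np.linalg.norm(g)                  # albedo is the magnitude
est_n = g / est_albedo                          # normal is the direction
```

With four or more non-coplanar lights the system is over-determined, which is what makes the per-pixel estimate robust to a noisy measurement.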
Perceptually Meaningful Image Editing: Depth
We introduce the concept of perceptually meaningful image editing and present two techniques for manipulating the apparent depth of objects in an image. The user loads an image, selects an object and specifies whether the object should appear closer or further away. The system automatically determines target values for the object and/or background that achieve the desired depth change. These depth editing operations, based on techniques used by traditional artists, manipulate either the luminance or color temperature of different regions of the image. By performing blending in the gradient domain and reconstruction with a Poisson solver, the appearance of false edges is minimized. The results of a preliminary user study, designed to evaluate the effectiveness of these techniques, are also presented.
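Gradient-domain editing of the kind described above can be sketched in one dimension: edit the gradient field of the selected region, then reconstruct the signal by a least-squares (Poisson-style) solve with a boundary value pinned, so that no false edge is introduced at the region border. The signal, region, and scaling factor below are illustrative assumptions, not the paper's actual operators.

```python
import numpy as np

n = 32
x = np.linspace(0.0, 1.0, n)
signal = np.cos(np.pi * x)          # 1D stand-in for an image row

# Edited gradient field: boost gradients inside the selected region.
g = np.diff(signal)
g[10:20] *= 1.5

# Forward-difference operator D, so D @ f approximates the gradient of f.
D = np.zeros((n - 1, n))
idx = np.arange(n - 1)
D[idx, idx] = -1.0
D[idx, idx + 1] = 1.0

# Least-squares reconstruction with the first sample softly pinned to the
# original value (a heavily weighted extra row acts as the boundary condition).
A = np.vstack([D, 1e3 * np.eye(1, n)])
b = np.concatenate([g, [1e3 * signal[0]]])
recon, *_ = np.linalg.lstsq(A, b, rcond=None)
```

In 2D the same normal equations become a discrete Poisson equation on the image grid, which is what a Poisson solver handles efficiently.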
Depth Synthesis and Local Warps for Plausible Image-based Navigation
Modern camera calibration and multiview stereo techniques enable users to smoothly navigate between different views of a scene captured using standard cameras. The underlying automatic 3D reconstruction methods work well for buildings and regular structures but often fail on vegetation, vehicles and other complex geometry present in everyday urban scenes. Consequently, missing depth information makes image-based rendering (IBR) for such scenes very challenging. Our goal is to provide plausible free-viewpoint navigation for such datasets. To do this, we introduce a new IBR algorithm that is robust to missing or unreliable geometry, providing plausible novel views even in regions quite far from the input camera positions. We first oversegment the input images, creating superpixels of homogeneous color content, which tend to preserve depth discontinuities. We then introduce a depth-synthesis approach for poorly reconstructed regions based on a graph structure on the oversegmentation and appropriate traversal of the graph. The superpixels augmented with synthesized depth allow us to define a local shape-preserving warp which compensates for inaccurate depth. Our rendering algorithm blends the warped images, and generates plausible image-based novel views for our challenging target scenes. Our results demonstrate novel view synthesis in real time for multiple challenging scenes with significant depth complexity, providing a convincing immersive navigation experience.
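The depth-synthesis step can be pictured as a traversal of the superpixel adjacency graph: superpixels with reliable multiview-stereo depth seed a breadth-first propagation into their poorly reconstructed neighbors. The toy graph and the simple "inherit the neighbor's depth" rule below are illustrative assumptions; the paper's traversal and depth assignment are more elaborate.

```python
from collections import deque

# Toy superpixel adjacency graph (a chain of five superpixels).
adjacency = {
    0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3],
}
depth = {0: 2.0, 4: 6.0}          # only superpixels 0 and 4 have MVS depth

# Breadth-first propagation from the reliably reconstructed superpixels.
frontier = deque(depth)
while frontier:
    s = frontier.popleft()
    for nb in adjacency[s]:
        if nb not in depth:
            depth[nb] = depth[s]  # simplest rule: inherit the seed's depth
            frontier.append(nb)
```

After the traversal every superpixel carries a depth value, which is what makes the subsequent local shape-preserving warp well defined everywhere in the image.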