Recovering refined surface normals for relighting clothing in dynamic scenes
In this paper we present a method to relight captured 3D video sequences of non-rigid, dynamic scenes, such as clothing of real actors, reconstructed from multiple view video. A view-dependent approach is introduced to refine an initial coarse surface reconstruction using shape-from-shading to estimate detailed surface normals. The prior surface approximation is used to constrain the simultaneous estimation of surface normals and scene illumination, under the assumption of Lambertian surface reflectance. This approach enables detailed surface normals of a moving non-rigid object to be estimated from a single image frame. Refined normal estimates from multiple views are integrated into a single surface normal map. This approach allows highly non-rigid surfaces, such as creases in clothing, to be relit whilst preserving the detailed dynamics observed in video.
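The Lambertian reflectance assumption underpinning the refinement above can be stated compactly: observed intensity depends only on the angle between the surface normal and the light direction. A minimal sketch of that image-formation model (our own illustrative function names, not the paper's implementation):

```python
import numpy as np

# Lambertian image formation: I = albedo * max(0, n . l) for a unit
# surface normal n and unit light direction l. With illumination known,
# each pixel's observed intensity constrains its normal, which is what
# the shape-from-shading refinement exploits. (Illustrative sketch only.)

def lambertian_intensity(normal, light, albedo=1.0):
    """Shade a unit surface normal under a unit directional light."""
    return albedo * max(0.0, float(np.dot(normal, light)))

n = np.array([0.0, 0.0, 1.0])      # surface facing the camera
l = np.array([0.0, 0.0, 1.0])      # light along the viewing axis
print(lambertian_intensity(n, l))  # head-on light gives full intensity: 1.0
```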
PS-FCN: A Flexible Learning Framework for Photometric Stereo
This paper addresses the problem of photometric stereo for non-Lambertian surfaces. Existing approaches often adopt simplified reflectance models to make the problem more tractable, but this greatly hinders their application to real-world objects. In this paper, we propose a deep fully convolutional network, called PS-FCN, that takes an arbitrary number of images of a static object captured under different light directions with a fixed camera as input, and predicts a normal map of the object in a fast feed-forward pass. Unlike the recently proposed learning-based method, PS-FCN does not require a pre-defined set of light directions during training and testing, and can handle multiple images and light directions in an order-agnostic manner. Although we train PS-FCN on synthetic data, it generalizes well to real datasets. We further show that PS-FCN can be easily extended to handle the problem of uncalibrated photometric stereo. Extensive experiments on public real datasets show that PS-FCN outperforms existing approaches in calibrated photometric stereo and achieves promising results in the uncalibrated scenario, clearly demonstrating its effectiveness.
Comment: ECCV 2018: https://guanyingc.github.io/PS-FC
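For context on what PS-FCN relaxes: under the classical calibrated Lambertian model, each pixel's observations form a linear system that is solved in least squares. A minimal sketch of that baseline (not PS-FCN itself, which replaces this with a learned network):

```python
import numpy as np

# Classical calibrated photometric stereo under the Lambertian model:
# for each pixel, m observations I_j = albedo * dot(n, l_j) give the
# linear system L @ (albedo * n) = I. The least-squares solution's
# direction is the normal and its magnitude the albedo.

def lambertian_ps(intensities, light_dirs):
    """intensities: (m,) per-pixel values; light_dirs: (m, 3) unit rows.
    Returns (unit normal, albedo)."""
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# Synthetic check: render a known normal under three lights, recover it.
true_n = np.array([0.0, 0.6, 0.8])
L = np.eye(3)                        # three axis-aligned light directions
I = 0.5 * (L @ true_n)               # albedo 0.5, no shadowing
n, rho = lambertian_ps(I, L)
print(np.allclose(n, true_n), round(rho, 3))  # True 0.5
```

This per-pixel solver is exactly where simplified reflectance models break down on real, non-Lambertian materials, which is the gap the learned approach targets.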
Learning Inter- and Intra-frame Representations for Non-Lambertian Photometric Stereo
In this paper, we build a two-stage Convolutional Neural Network (CNN) architecture to construct inter- and intra-frame representations from an arbitrary number of images captured under different light directions, performing accurate normal estimation of non-Lambertian objects. We experimentally investigate numerous network design alternatives to identify the optimal scheme for deploying inter-frame and intra-frame feature extraction modules for the photometric stereo problem. Moreover, we propose to utilize the easily obtained object mask to eliminate adverse interference from invalid background regions in intra-frame spatial convolutions, effectively improving the accuracy of normal estimation for surfaces made of dark materials or with cast shadows. Experimental results demonstrate that the proposed masked two-stage photometric stereo CNN model (MT-PS-CNN) performs favorably against state-of-the-art photometric stereo techniques in terms of both accuracy and efficiency. In addition, the proposed method predicts accurate and rich surface normal detail for non-Lambertian objects of complex geometry and performs stably on inputs captured under both sparse and dense lighting distributions.
Comment: 9 pages, 8 figures
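The masking idea above can be illustrated with a normalized masked filter: background pixels are zeroed before the spatial aggregation and the result is renormalized by the count of valid pixels, so invalid regions never leak into foreground features. A minimal sketch of the principle (a hand-written box filter, not the paper's learned convolution layers):

```python
import numpy as np

# Normalized masked box filter: zero out background pixels before
# spatial aggregation and divide by the number of valid pixels in each
# window, so dark or invalid background cannot contaminate foreground
# responses. (Illustrative only; MT-PS-CNN applies the mask inside
# learned convolutions.)

def masked_box_filter(img, mask, k=3):
    h, w = img.shape
    pad = k // 2
    out = np.zeros_like(img, dtype=float)
    im = np.pad(img * mask, pad)           # masked image
    mk = np.pad(mask.astype(float), pad)   # validity map
    for y in range(h):
        for x in range(w):
            s = im[y:y + k, x:x + k].sum()
            c = mk[y:y + k, x:x + k].sum()
            out[y, x] = s / c if c > 0 else 0.0
    return out * mask

img = np.array([[5.0, 5.0, 0.1],           # bright object on a dark background
                [5.0, 5.0, 0.1],
                [0.1, 0.1, 0.1]])
mask = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
print(masked_box_filter(img, mask))        # foreground stays 5.0: background excluded
```

Without the mask, the dark background would pull the boundary responses down, which is precisely the interference the paper removes.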
Intrinsic Textures for Relightable Free-Viewpoint Video
This paper presents an approach to estimate the intrinsic texture properties (albedo, shading, normal) of scenes from multiple view acquisition under unknown illumination conditions. We introduce the concept of intrinsic textures, which are pixel-resolution surface textures representing the intrinsic appearance parameters of a scene. Unlike previous video relighting methods, the approach does not assume regions of uniform albedo, which makes it applicable to richly textured scenes. We show that intrinsic image methods can be used to refine an initial, low-frequency shading estimate based on a global lighting reconstruction from an original texture and coarse scene geometry in order to resolve the inherent global ambiguity in shading. The method is applied to relighting of free-viewpoint rendering from multiple view video capture. This demonstrates relighting with reproduction of fine surface detail. Quantitative evaluation on synthetic models with textured appearance shows accurate estimation of intrinsic surface reflectance properties. © 2014 Springer International Publishing
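The multiplicative model behind intrinsic textures can be sketched in a few lines: the observed texture factors as albedo times shading, so given a (refined) shading estimate the albedo follows by per-pixel division, and relighting re-multiplies the albedo by new shading. A minimal sketch of that model (not the paper's full global-lighting refinement):

```python
import numpy as np

# Intrinsic image model: observed = albedo * shading. Recovering albedo
# lets the same surface be re-rendered under different illumination,
# which is the basis of relightable free-viewpoint video.

image   = np.array([[0.8, 0.4],
                    [0.2, 0.1]])           # captured texture
shading = np.array([[1.0, 0.5],
                    [0.5, 0.25]])          # estimated shading (illustrative values)

albedo = image / np.clip(shading, 1e-6, None)  # intrinsic reflectance per pixel
relit  = albedo * (0.5 * shading)              # relight: halve the illumination
print(albedo)
print(np.allclose(relit, 0.5 * image))         # True
```

Note that the recovered albedo here varies across pixels; as the paper stresses, no uniform-albedo assumption is needed.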
Dense Correspondence Estimation for Image Interpolation
We evaluate the current state-of-the-art in dense correspondence estimation for use in multi-image interpolation algorithms.
The evaluation is carried out on three real-world scenes and one synthetic scene, each featuring varying challenges for dense correspondence estimation. The primary focus of our study is the perceptual quality of the interpolation sequences created from the estimated flow fields. Perceptual plausibility is assessed by means of a psychophysical user study. Our results show that the current state of the art in dense correspondence estimation does not produce visually plausible interpolations.
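The interpolation step the estimated flow fields feed into can be sketched as a backward warp along scaled flow: an intermediate frame at time t is rendered by sampling the source frame at positions displaced by t times the flow. A minimal nearest-neighbour sketch (our illustration of the general scheme, not any specific evaluated method):

```python
import numpy as np

# Flow-based frame interpolation: given a dense flow field f mapping
# frame A toward frame B, render the frame at time t by sampling A at
# positions displaced by t * f (nearest-neighbour backward warp).
# Errors in the estimated flow show up directly as artifacts here,
# which is what the perceptual study measures.

def warp(frame, flow, t=0.5):
    h, w = frame.shape
    out = np.zeros_like(frame)
    for y in range(h):
        for x in range(w):
            sx = int(round(x - t * flow[y, x, 0]))   # source column
            sy = int(round(y - t * flow[y, x, 1]))   # source row
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = frame[sy, sx]
    return out

frame = np.eye(4)                                # a diagonal stripe
flow = np.zeros((4, 4, 2))
flow[..., 0] = 2.0                               # uniform 2-pixel shift to the right
mid = warp(frame, flow)                          # stripe moved 1 pixel at t = 0.5
print(mid)
```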