Unsupervised Deep Single-Image Intrinsic Decomposition using Illumination-Varying Image Sequences
Machine learning based Single Image Intrinsic Decomposition (SIID) methods
decompose a captured scene into its albedo and shading images by using the
knowledge of a large set of known and realistic ground truth decompositions.
Collecting and annotating such a dataset is an approach that cannot scale to
sufficient variety and realism. We free ourselves from this limitation by
training on unannotated images.
Our method leverages the observation that two images of the same scene but
with different lighting provide useful information on their intrinsic
properties: by definition, albedo is invariant to lighting conditions, and
cross-combining the estimated albedo of a first image with the estimated
shading of a second one should lead back to the second one's input image. We
transcribe this relationship into a siamese training scheme for a deep
convolutional neural network that decomposes a single image into albedo and
shading. The siamese setting allows us to introduce a new loss function
including such cross-combinations, and to train solely on (time-lapse) images,
discarding the need for any ground truth annotations.
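The cross-combination constraint described above can be sketched as a loss over a siamese pair. The formulation below is a hypothetical, equally weighted version for illustration — the function name, inputs, and term weights are assumptions, not the paper's exact loss:

```python
import numpy as np

def mse(x, y):
    return float(np.mean((x - y) ** 2))

def cross_combination_loss(i1, i2, a1, s1, a2, s2):
    """Siamese loss sketch for two images (i1, i2) of the same scene under
    different lighting, with estimated albedos (a1, a2) and shadings (s1, s2).
    Hypothetical, equally weighted formulation."""
    # each image should reconstruct from its own albedo * shading
    recon = mse(a1 * s1, i1) + mse(a2 * s2, i2)
    # albedo is invariant to lighting, so both albedo estimates should agree
    albedo_consistency = mse(a1, a2)
    # cross-combination: the albedo of one image combined with the shading
    # of the other should still reconstruct the shading's source image
    cross = mse(a1 * s2, i2) + mse(a2 * s1, i1)
    return recon + albedo_consistency + cross
```

A perfect decomposition of both images drives every term to zero, which is what allows training without ground-truth annotations.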
As a result, our method i) takes advantage of the time-varying information
of image sequences in the (pre-computed) training step, ii) requires no ground
truth data to train on, and iii) can decompose single images of unseen scenes
at runtime. To demonstrate and
evaluate our work, we additionally propose a new rendered dataset containing
illumination-varying scenes and a set of quantitative metrics to evaluate SIID
algorithms. Despite its unsupervised nature, our results compete with
state-of-the-art methods, including supervised and non-data-driven methods.
Comment: To appear in Pacific Graphics 201
Live User-guided Intrinsic Video For Static Scenes
We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes, and editing of material appearance.
CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition
Most traditional work on intrinsic image decomposition relies on
deriving priors about scene characteristics. On the other hand, recent research
uses deep learning models as in-and-out black boxes and does not consider the
well-established, traditional image formation process as the basis of the
intrinsic learning process. As a consequence, although current deep learning
approaches show superior performance when considering quantitative benchmark
results, traditional approaches are still dominant in achieving high
qualitative results. In this paper, the aim is to exploit the best of both
worlds. A method is proposed that (1) is empowered by deep learning
capabilities, (2) considers a physics-based reflection model to steer the
learning process, and (3) exploits the traditional approach to obtain intrinsic
images by exploiting reflectance and shading gradient information. The proposed
model is fast to compute and allows for the integration of all intrinsic
components. To train the new model, an object-centered large-scale dataset
with intrinsic ground-truth images is created. The evaluation results
demonstrate that the new model outperforms existing methods. Visual inspection
shows that the image formation loss function augments color reproduction and
the use of gradient information produces sharper edges. Datasets, models and
higher resolution images are available at https://ivi.fnwi.uva.nl/cv/retinet.
Comment: CVPR 201
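The physics-based reflection model that steers the learning above rests on the classic Retinex/Lambertian factorization, in which the observed image is the per-pixel product of reflectance and shading. A minimal sketch of such an image-formation reconstruction term — the function and variable names are illustrative assumptions, not the paper's API:

```python
import numpy as np

def image_formation_loss(image, reflectance, shading):
    """Physics-based reconstruction term (sketch): under a Lambertian
    reflection model, the observed image factors per pixel into
    reflectance (albedo) times shading, I = R * S. The mean squared
    residual of that factorization is used as the training penalty."""
    return float(np.mean((reflectance * shading - image) ** 2))
```

Any decomposition whose product fails to reproduce the input image is penalized, which ties the learned intrinsics back to the image formation process.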
Joint Learning of Intrinsic Images and Semantic Segmentation
Semantic segmentation of outdoor scenes is problematic when there are
variations in imaging conditions. It is known that albedo (reflectance) is
invariant to all kinds of illumination effects. Thus, using reflectance images
for the semantic segmentation task can be favorable. Additionally, not only
may segmentation benefit from reflectance, but segmentation may also be useful
for reflectance computation. Therefore, in this paper, the tasks of semantic
segmentation and intrinsic image decomposition are considered as a combined
process by exploring their mutual relationship in a joint fashion. To that end,
we propose a supervised end-to-end CNN architecture to jointly learn intrinsic
image decomposition and semantic segmentation. We analyze the gains of
addressing those two problems jointly. Moreover, new cascade CNN architectures
for intrinsic-for-segmentation and segmentation-for-intrinsic are proposed as
single tasks. Furthermore, a dataset of 35K synthetic images of natural
environments is created with corresponding albedo and shading (intrinsics), as
well as semantic labels (segmentation) assigned to each object/scene. The
experiments show that joint learning of intrinsic image decomposition and
semantic segmentation is beneficial for both tasks for natural scenes. Dataset
and models are available at: https://ivi.fnwi.uva.nl/cv/intrinseg
Comment: ECCV 201
Single View Reconstruction for Human Face and Motion with Priors
Single view reconstruction is fundamentally an under-constrained problem. We aim to develop new approaches to model human face and motion with model priors that restrict the space of possible solutions. First, we develop a novel approach to recover the 3D shape from a single view image under challenging conditions, such as large variations in illumination and pose. The problem is addressed by employing the techniques of non-linear manifold embedding and alignment. Specifically, the local image models for each patch of facial images and the local surface models for each patch of 3D shape are learned using a non-linear dimensionality reduction technique, and the correspondences between these local models are then learned by a manifold alignment method. Local models successfully remove the dependency on large training databases for human face modeling. By combining the local shapes, the global shape of a face can be reconstructed directly from a single linear system of equations via least squares.
Unfortunately, this learning-based approach cannot be successfully applied to the problem of human motion modeling due to the internal and external variations in single view video-based marker-less motion capture. Therefore, we introduce a new model-based approach for capturing human motion using a stream of depth images from a single depth sensor. While a depth sensor provides metric 3D information, using a single sensor, instead of a camera array, results in a view-dependent and incomplete measurement of object motion. We develop a novel two-stage template fitting algorithm that is invariant to subject size and view-point variations, and robust to occlusions. Starting from a known pose, our algorithm first estimates a body configuration through temporal registration, which is used to search the template motion database for a best match. The best match body configuration as well as its corresponding surface mesh model are deformed to fit the input depth map, filling in the part that is occluded from the input and compensating for differences in pose and body-size between the input image and the template. Our approach does not require any markers, user interaction, or appearance-based tracking.
Experiments show that our approaches achieve good modeling results for human face and motion, and are capable of dealing with a variety of challenges in single view reconstruction, e.g., occlusions.
Learning from Synthetic Humans
Estimating human pose, shape, and motion from images and videos is a
fundamental challenge with many applications. Recent advances in 2D human pose
estimation use large amounts of manually-labeled training data for learning
convolutional neural networks (CNNs). Such data is time consuming to acquire
and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion
is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL
tasks): a new large-scale dataset with synthetically-generated but realistic
images of people rendered from 3D sequences of human motion capture data. We
generate more than 6 million frames together with ground truth pose, depth
maps, and segmentation masks. We show that CNNs trained on our synthetic
dataset allow for accurate human depth estimation and human part segmentation
in real RGB images. Our results and the new dataset open up new possibilities
for advancing person analysis using cheap and large-scale synthetic data.
Comment: Appears in: 2017 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2017). 9 pages
Physics-based Shading Reconstruction for Intrinsic Image Decomposition
We investigate the use of photometric invariance and deep learning to compute
intrinsic images (albedo and shading). We propose albedo and shading gradient
descriptors which are derived from physics-based models. Using the descriptors,
albedo transitions are masked out and an initial sparse shading map is
calculated directly from the corresponding RGB image gradients in a
learning-free unsupervised manner. Then, an optimization method is proposed to
reconstruct the full dense shading map. Finally, we integrate the generated
shading map into a novel deep learning framework to refine it and also to
predict the corresponding albedo image to achieve intrinsic image
decomposition. By doing so, we are the first to directly address the texture
and intensity ambiguity problems of shading estimation. Large-scale
experiments show that our approach, steered by physics-based invariant
descriptors, achieves superior results on the MIT Intrinsics, NIR-RGB
Intrinsics, Multi-Illuminant Intrinsic Images, Spectral Intrinsic Images, and
As Realistic As Possible datasets, and competitive results on the Intrinsic
Images in the Wild dataset, while achieving state-of-the-art shading
estimation.
Comment: Submitted to Computer Vision and Image Understanding (CVIU)
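The albedo/shading gradient separation described above builds on the classic Retinex observation that shading changes image intensity while preserving chromaticity. A minimal heuristic sketch of that idea — the function name, threshold value, and single-axis gradient are illustrative assumptions, not the paper's actual descriptors:

```python
import numpy as np

def classify_gradients(image, chroma_thresh=0.05):
    """Retinex-style heuristic (sketch): gradients with little chromaticity
    change are attributed to shading; the rest are albedo transitions.
    `image` is an (H, W, 3) RGB array; returns an (H, W-1) boolean mask
    over horizontal pixel pairs (True = albedo edge, False = shading)."""
    eps = 1e-6
    # chromaticity: per-pixel RGB normalized by total intensity
    intensity = image.sum(axis=-1, keepdims=True) + eps
    chroma = image / intensity
    # magnitude of the horizontal chromaticity gradient
    dchroma = np.abs(np.diff(chroma, axis=1)).sum(axis=-1)
    return dchroma > chroma_thresh
```

An intensity step with unchanged chromaticity (e.g. a cast shadow) falls below the threshold and is treated as shading, so a sparse shading map can be read off the masked image gradients without any learning.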