CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering
Intrinsic image decomposition is a challenging, long-standing computer vision
problem for which ground truth data is very difficult to acquire. We explore
the use of synthetic data for training CNN-based intrinsic image decomposition
models, which are then applied to real-world images. To that end,
we present CGIntrinsics, a new, large-scale dataset of physically-based rendered images
of scenes with full ground truth decompositions. The rendering process we use
is carefully designed to yield high-quality, realistic images, which we find to
be crucial for this problem domain. We also propose a new end-to-end training
method that learns better decompositions by leveraging CGIntrinsics, and optionally IIW
and SAW, two recent datasets of sparse annotations on real-world images.
Surprisingly, we find that a decomposition network trained solely on our
synthetic data outperforms the state-of-the-art on both IIW and SAW, and
performance improves even further when IIW and SAW data is added during
training. Our work demonstrates the surprising effectiveness of
carefully rendered synthetic data for the intrinsic images task.
Comment: Published in ECCV, 201
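The decomposition described above rests on the standard intrinsic image model, in which an image factors pixel-wise into an illumination-invariant albedo (reflectance) layer and a shading layer. A minimal numerical sketch of that model (a toy illustration, not the paper's method) is:

```python
import numpy as np

# Toy illustration of the intrinsic image model: an image I factors
# pixel-wise into albedo A and shading S, i.e. I = A * S. In log space the
# factorization becomes additive, which is why decomposition losses are
# often formulated on log-images.
rng = np.random.default_rng(0)
A = rng.uniform(0.2, 0.9, size=(4, 4))   # albedo: illumination-invariant
S = rng.uniform(0.1, 1.0, size=(4, 4))   # shading: illumination-dependent
I = A * S                                # composed image

# Given the true shading, albedo is recovered exactly by division,
# or equivalently by subtraction in the log domain.
A_rec = np.exp(np.log(I) - np.log(S))
assert np.allclose(A_rec, A)
```

The difficulty of the task is that a real decomposition network sees only I and must estimate A and S jointly, which is why dense, physically-based ground truth is valuable for training.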
Joint Learning of Intrinsic Images and Semantic Segmentation
Semantic segmentation of outdoor scenes is problematic when there are
variations in imaging conditions. It is known that albedo (reflectance) is
invariant to all kinds of illumination effects. Thus, using reflectance images
for the semantic segmentation task can be favorable. Additionally, not only may
segmentation benefit from reflectance, but segmentation may in turn be useful
for reflectance computation. Therefore, in this paper, the tasks of semantic
segmentation and intrinsic image decomposition are considered as a combined
process by exploring their mutual relationship in a joint fashion. To that end,
we propose a supervised end-to-end CNN architecture to jointly learn intrinsic
image decomposition and semantic segmentation. We analyze the gains of
addressing those two problems jointly. Moreover, new cascade CNN architectures
for intrinsic-for-segmentation and segmentation-for-intrinsic are proposed as
single tasks. Furthermore, a dataset of 35K synthetic images of natural
environments is created with corresponding albedo and shading (intrinsics), as
well as semantic labels (segmentation) assigned to each object/scene. The
experiments show that joint learning of intrinsic image decomposition and
semantic segmentation is beneficial for both tasks for natural scenes. Dataset
and models are available at: https://ivi.fnwi.uva.nl/cv/intrinseg
Comment: ECCV 201
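The cascade "intrinsic-for-segmentation" idea can be sketched as simple function composition: feed the predicted albedo, rather than the raw RGB image, into the segmentation stage so that illumination effects are removed before labels are assigned. The sketch below is hypothetical (the stand-in functions are not the paper's CNNs):

```python
import numpy as np

def decompose(image):
    """Stand-in for an intrinsic decomposition CNN: returns (albedo, shading)."""
    shading = image.mean(axis=-1, keepdims=True)   # crude shading proxy
    albedo = image / np.maximum(shading, 1e-6)     # I = A * S  =>  A = I / S
    return albedo, shading

def segment(albedo, n_classes=3):
    """Stand-in for a segmentation CNN: quantizes one albedo channel into class ids."""
    feature = albedo[..., 0]
    edges = np.linspace(feature.min(), feature.max() + 1e-6, n_classes + 1)
    return np.digitize(feature, edges[1:-1])       # labels in 0..n_classes-1

image = np.random.default_rng(1).uniform(0.1, 1.0, size=(8, 8, 3))
albedo, shading = decompose(image)
labels = segment(albedo)   # segmentation computed on reflectance, not raw RGB
assert labels.shape == (8, 8) and labels.max() < 3
```

The reverse cascade, "segmentation-for-intrinsic", composes the stages in the other order, using predicted labels as a cue for which pixels should share reflectance.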
Reflectance Adaptive Filtering Improves Intrinsic Image Estimation
Separating an image into reflectance and shading layers poses a challenge for
learning approaches because no large corpus of precise and realistic ground
truth decompositions exists. The Intrinsic Images in the Wild~(IIW) dataset
provides a sparse set of relative human reflectance judgments, which serves as
a standard benchmark for intrinsic images. A number of methods use IIW to learn
statistical dependencies between the images and their reflectance layer.
Although learning plays an important role in achieving high performance, we show that a
standard signal processing technique achieves performance on par with the current
state of the art. We propose a loss function for CNN learning of dense
reflectance predictions. Our results show that a simple pixel-wise decision, without
any context or prior knowledge, is sufficient to provide a strong baseline on
IIW. This sets a competitive baseline which only two other approaches surpass.
We then develop a joint bilateral filtering method that implements strong prior
knowledge about reflectance constancy. This filtering operation can be applied
to any intrinsic image algorithm and we improve several previous results
achieving a new state-of-the-art on IIW. Our findings suggest that the effect
of learning-based approaches may have been overestimated so far. Explicit
prior knowledge remains at least as important for obtaining high performance in
intrinsic image decomposition.
Comment: CVPR 201
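The filtering idea above can be sketched as a joint bilateral filter: a noisy reflectance estimate is smoothed with weights whose range kernel is computed on a guide image, encoding the prior that reflectance is piecewise constant. This is an illustrative sketch only; the parameter names and values are assumptions, not the paper's implementation:

```python
import numpy as np

def joint_bilateral(estimate, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth `estimate` with spatial weights plus range weights from `guide`."""
    h, w = estimate.shape
    out = np.zeros_like(estimate)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s**2))
            rangew = np.exp(-(guide[y0:y1, x0:x1] - guide[y, x]) ** 2
                            / (2 * sigma_r**2))
            wts = spatial * rangew
            out[y, x] = (wts * estimate[y0:y1, x0:x1]).sum() / wts.sum()
    return out

# Piecewise-constant guide with a noisy estimate: filtering should reduce
# noise inside each region without blurring across the step edge.
guide = np.zeros((10, 10)); guide[:, 5:] = 1.0
noisy = guide + np.random.default_rng(2).normal(0, 0.05, guide.shape)
smoothed = joint_bilateral(noisy, guide)
assert np.abs(smoothed - guide).mean() < np.abs(noisy - guide).mean()
```

Because the range weights come from the guide rather than the noisy estimate, a sharp reflectance edge in the guide suppresses averaging across it, which is the reflectance-constancy prior the abstract refers to.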
3D Reconstruction with Low Resolution, Small Baseline and High Radial Distortion Stereo Images
In this paper we analyze and compare approaches for 3D reconstruction from
low-resolution (250x250), high radial distortion stereo images, which are
acquired with a small baseline (approximately 1 mm) using the NanEye Stereo
system manufactured by CMOSIS/AWAIBA. These stereo
cameras also have small apertures, which means that high levels of illumination
are required. The goal was to develop an approach yielding accurate
reconstructions, with a low computational cost, i.e., avoiding non-linear
numerical optimization algorithms. In particular we focused on the analysis and
comparison of radial distortion models. To perform the analysis and comparison,
we defined a baseline method based on available software and methods, such as
the Bouguet toolbox [2] or the Computer Vision Toolbox from Matlab. The
approaches tested were based on the use of the polynomial model of radial
distortion, and on the application of the division model. The issue of the
center of distortion was also addressed within the framework of the application
of the division model. We concluded that the division model with a single
radial distortion parameter has limitations.
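The two distortion models compared above can be contrasted numerically. In the polynomial model a radius r is distorted as r_d = r(1 + k1 r^2 + ...), while the one-parameter division model maps a distorted radius back as r_u = r_d / (1 + lambda r_d^2). The sketch below (with made-up coefficients, not the paper's calibration) fits the single division parameter to invert a polynomial distortion and shows the residual that remains:

```python
import numpy as np

def distort_polynomial(r, k1):
    # One-term polynomial radial distortion: r_d = r * (1 + k1 * r^2)
    return r * (1 + k1 * r**2)

def undistort_division(r_d, lam):
    # One-parameter division model: r_u = r_d / (1 + lambda * r_d^2)
    return r_d / (1 + lam * r_d**2)

# Fit the single division parameter (grid search) to invert a given
# polynomial distortion over a range of normalized radii; the nonzero
# residual shows that one parameter can only approximate the inverse.
r = np.linspace(0.0, 1.0, 50)
r_d = distort_polynomial(r, k1=-0.2)          # barrel distortion (illustrative)
lams = np.linspace(-1.0, 1.0, 2001)
errors = [np.abs(undistort_division(r_d, lam) - r).max() for lam in lams]
best = lams[int(np.argmin(errors))]
print("best lambda:", best, "max residual:", min(errors))
assert min(errors) < 0.05                      # approximate, not exact, inverse
```

This kind of residual analysis is one way to make the trade-off between model simplicity and undistortion accuracy concrete.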