Unsupervised Deep Single-Image Intrinsic Decomposition using Illumination-Varying Image Sequences
Machine-learning-based Single-Image Intrinsic Decomposition (SIID) methods decompose a captured scene into its albedo and shading images by learning from a large set of realistic ground-truth decompositions. Collecting and annotating such a dataset cannot scale to sufficient variety and realism. We free ourselves from this limitation by training on unannotated images.
Our method leverages the observation that two images of the same scene under different lighting provide useful information on its intrinsic properties: by definition, albedo is invariant to lighting conditions, so cross-combining the estimated albedo of the first image with the estimated shading of the second should reconstruct the second input image. We transcribe this relationship into a Siamese training scheme for a deep convolutional neural network that decomposes a single image into albedo and shading. The Siamese setting allows us to introduce a new loss function built on such cross-combinations and to train solely on (time-lapse) image sequences, discarding the need for any ground-truth annotations.
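To make the cross-combination idea concrete, here is a minimal sketch of how such a Siamese loss could look, assuming a multiplicative image model I = A * S and a network `net` that maps an image to an (albedo, shading) pair; the names and the equal weighting of the terms are illustrative, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def cross_combination_loss(net, img1, img2):
    """Siamese loss for two images of the same scene under different lighting.

    Assumes a multiplicative image model I = A * S and a network `net`
    mapping an image to an (albedo, shading) pair. All names here are
    illustrative; the paper's exact formulation may differ.
    """
    a1, s1 = net(img1)
    a2, s2 = net(img2)

    # Albedo is lighting-invariant, so both estimates should agree.
    albedo_consistency = F.l1_loss(a1, a2)

    # Cross-combining the albedo of one image with the shading of the
    # other should reconstruct the image the shading came from.
    cross_recon = F.l1_loss(a1 * s2, img2) + F.l1_loss(a2 * s1, img1)

    # Each decomposition should also reconstruct its own input.
    self_recon = F.l1_loss(a1 * s1, img1) + F.l1_loss(a2 * s2, img2)

    return albedo_consistency + cross_recon + self_recon
```

Training on time-lapse pairs with such a loss needs no annotations, since every term compares network outputs against the input images themselves.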
As a result, our method has the desirable properties of i) taking advantage of the time-varying information of image sequences in the (pre-computed) training step, ii) not requiring ground-truth data to train on, and iii) being able to decompose single images of unseen scenes at runtime. To demonstrate and evaluate our work, we additionally propose a new rendered dataset of illumination-varying scenes and a set of quantitative metrics for evaluating SIID algorithms. Despite its unsupervised nature, our method competes with state-of-the-art approaches, including supervised and non-data-driven methods.

Comment: To appear in Pacific Graphics 2018
Physics-based Shading Reconstruction for Intrinsic Image Decomposition
We investigate the use of photometric invariance and deep learning to compute
intrinsic images (albedo and shading). We propose albedo and shading gradient descriptors derived from physics-based models. Using these descriptors, albedo transitions are masked out and an initial sparse shading map is
calculated directly from the corresponding RGB image gradients in a
learning-free unsupervised manner. Then, an optimization method is proposed to
reconstruct the full dense shading map. Finally, we integrate the generated
shading map into a novel deep learning framework to refine it and to predict the corresponding albedo image, completing the intrinsic image decomposition. In doing so, we are the first to directly address the texture and intensity ambiguity problems of shading estimation. Large-scale experiments show that our approach, steered by physics-based invariant descriptors, achieves superior results on the MIT Intrinsics, NIR-RGB Intrinsics, Multi-Illuminant Intrinsic Images, Spectral Intrinsic Images, and As Realistic As Possible datasets, and competitive results on Intrinsic Images in the Wild, while attaining state-of-the-art shading estimations.

Comment: Submitted to Computer Vision and Image Understanding (CVIU)
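As an illustration of such a learning-free step, the sketch below masks albedo transitions using a chromaticity-based invariant and keeps the remaining log-intensity gradients as a sparse shading estimate; the Lambertian assumption, the chromaticity cue, and the threshold are stand-ins for the paper's actual physics-based descriptors.

```python
import numpy as np

def sparse_shading_gradients(rgb, chroma_thresh=0.02, eps=1e-6):
    """Learning-free sparse shading gradients from a single RGB image.

    Assumes a Lambertian model I = A * S, so log-intensity gradients are
    the sum of albedo and shading gradients. Chromaticity (r, g) is
    invariant to gray-scale shading, so strong chromaticity gradients
    flag albedo transitions, which are masked out. The threshold and the
    exact invariant are illustrative, not the paper's descriptors.
    """
    rgb = rgb.astype(np.float64) + eps
    intensity = rgb.sum(axis=2)
    chroma = rgb[..., :2] / intensity[..., None]   # (r, g) chromaticity
    log_i = np.log(intensity)

    # Image gradients of log intensity and chromaticity.
    gy, gx = np.gradient(log_i)
    cgy, cgx = np.gradient(chroma, axis=(0, 1))
    chroma_mag = np.sqrt((cgx ** 2 + cgy ** 2).sum(axis=-1))

    # Keep only gradients where chromaticity is stable: these are
    # attributed to shading; the rest are masked as albedo edges.
    shading_mask = chroma_mag < chroma_thresh
    return gx * shading_mask, gy * shading_mask, shading_mask
```

A dense shading map could then be recovered from the retained gradients, for instance by Poisson integration, corresponding to the optimization stage described above.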
Estimating Reflectance Layer from A Single Image: Integrating Reflectance Guidance and Shadow/Specular Aware Learning
Estimating the reflectance layer from a single image is a challenging task. It becomes even more challenging when the input image contains shadows or specular highlights, which often lead to an inaccurate estimate of the reflectance layer.
Therefore, we propose a two-stage learning method, including reflectance
guidance and a Shadow/Specular-Aware (S-Aware) network to tackle the problem.
In the first stage, an initial reflectance layer free from shadows and
specularities is obtained with the constraint of novel losses that are guided
by prior-based shadow-free and specular-free images. To further enforce the reflectance layer's independence from shadows and specularities in the second-stage refinement, we introduce an S-Aware network that distinguishes the reflectance image from the input image. Our network employs a classifier to categorize shadow/shadow-free and specular/specular-free classes, enabling its activation features to function as attention maps that focus on shadow/specular regions. Our quantitative and qualitative evaluations show that our method outperforms state-of-the-art methods in estimating a reflectance layer free from shadows and specularities.

Comment: Accepted to AAAI 2023
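A minimal sketch of this classifier-as-attention idea, in the style of class activation maps: the per-class activation maps that drive the shadow/specular classification are reused as spatial attention. The architecture below is an assumption chosen for illustration, not the paper's S-Aware network.

```python
import torch
import torch.nn as nn

class SAwareAttention(nn.Module):
    """Illustrative sketch: a classifier over shadow/shadow-free (or
    specular/specular-free) classes whose activation features double as
    spatial attention. Layer sizes are assumptions, not the paper's."""

    def __init__(self, channels=64, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # 1x1 conv produces per-class activation maps (CAM-style).
        self.class_maps = nn.Conv2d(channels, num_classes, 1)

    def forward(self, x):
        f = self.features(x)
        maps = self.class_maps(f)           # (B, num_classes, H, W)
        logits = maps.mean(dim=(2, 3))      # image-level classification
        # The activation map of the shadow/specular class, squashed to
        # [0, 1], acts as attention highlighting regions to refine.
        attention = torch.sigmoid(maps[:, :1])
        return logits, attention
```

The attention map can then gate the features of the refinement branch so that it concentrates on shadowed or specular regions.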
Object-based Illumination Estimation with Rendering-aware Neural Networks
We present a scheme for fast environment light estimation from the RGBD
appearance of individual objects and their local image areas. Conventional
inverse rendering is too computationally demanding for real-time applications,
and the performance of purely learning-based techniques may be limited by the
meager input data available from individual objects. To address these issues,
we propose an approach that takes advantage of physical principles from inverse
rendering to constrain the solution, while also utilizing neural networks to
expedite the more computationally expensive portions of its processing, to
increase robustness to noisy input data as well as to improve temporal and
spatial stability. This results in a rendering-aware system that estimates the
local illumination distribution at an object with high accuracy and in real
time. With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene, leading to improved realism.

Comment: ECCV 2020
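One way to picture the inverse-rendering constraint is a render-and-compare residual: render the object's appearance under a candidate lighting and score it against the observation. The Lambertian model and directional-light parameterization below are assumptions chosen for brevity; the paper's renderer and light representation may differ.

```python
import numpy as np

def render_and_compare(normals, albedo, observed, light_dirs, light_rgb):
    """Inverse-rendering residual for a candidate illumination estimate.

    normals:    (N, 3) unit surface normals from the depth channel
    albedo:     (N, 3) per-point albedo
    observed:   (N, 3) observed object colors
    light_dirs: (L, 3) unit light directions (stand-in for an env map)
    light_rgb:  (L, 3) light intensities

    The Lambertian assumption and the input layout are illustrative.
    """
    # Per-point irradiance: sum of clamped cosine terms over all lights.
    cosines = np.clip(normals @ light_dirs.T, 0.0, None)   # (N, L)
    irradiance = cosines @ light_rgb                       # (N, 3)
    rendered = albedo * irradiance
    # Residual used to constrain (or score) the lighting estimate.
    return np.mean((rendered - observed) ** 2)
```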
Rain Removal in Traffic Surveillance: Does it Matter?
Varying weather conditions, including rainfall and snowfall, are generally
regarded as a challenge for computer vision algorithms. One proposed solution
to the challenges induced by rain and snowfall is to artificially remove the
rain from images or video using rain removal algorithms. These algorithms promise that the de-rained frames will improve the performance of subsequent segmentation and tracking algorithms. However, rain
removal algorithms are typically evaluated on their ability to remove synthetic
rain on a small subset of images. Currently, their behavior is unknown on
real-world videos when integrated with a typical computer vision pipeline. In
this paper, we review the existing rain removal algorithms and propose a new
dataset that consists of 22 traffic surveillance sequences under a broad
variety of weather conditions that all include either rain or snowfall. We
propose a new evaluation protocol that evaluates the rain removal algorithms on
their ability to improve the performance of subsequent segmentation, instance
segmentation, and feature tracking algorithms under rain and snow. If
successful, the de-rained frames of a rain removal algorithm should improve
segmentation performance and increase the number of accurately tracked
features. The results show that a recent single-frame-based rain removal algorithm increases segmentation performance by 19.7% on our proposed dataset, but decreases feature tracking performance and shows mixed results with recent instance segmentation methods. However, the best video-based rain removal algorithm improves feature tracking accuracy by 7.72%.

Comment: Published in IEEE Transactions on Intelligent Transportation Systems
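In skeleton form, the protocol evaluates a rain removal method by the downstream metrics it changes rather than by pixel similarity to a rain-free reference; every callable below is a placeholder for a concrete de-rainer, segmenter, tracker, and task metric.

```python
def evaluate_rain_removal(sequences, derain, segment, track, metric):
    """Skeleton of an application-level evaluation protocol: rain removal
    is judged by its effect on downstream tasks. All callables are
    placeholders for a concrete de-rainer, segmenter, tracker, and
    metric; the paper's exact pipeline may differ.
    """
    results = []
    for frames, ground_truth in sequences:
        derained = [derain(f) for f in frames]
        results.append({
            # Segmentation quality with and without de-raining.
            "seg_raw": metric(segment(frames), ground_truth),
            "seg_derained": metric(segment(derained), ground_truth),
            # Number of accurately tracked features in each case.
            "tracks_raw": track(frames),
            "tracks_derained": track(derained),
        })
    return results
```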