Image Formation Model Guided Deep Image Super-Resolution
We present a simple and effective image super-resolution algorithm that
imposes an image formation constraint on the deep neural networks via pixel
substitution. The proposed algorithm first uses a deep neural network to
estimate intermediate high-resolution images, blurs the intermediate images
using known blur kernels, and then substitutes values of the pixels at the
un-decimated positions with those of the corresponding pixels from the
low-resolution images. The output of the pixel substitution process strictly
satisfies the image formation model and is further refined by the same deep
neural network in a cascaded manner. The proposed framework is trained in an
end-to-end fashion, works with existing feed-forward deep neural networks for
super-resolution, and converges quickly in practice. Extensive experimental
results show that the proposed algorithm performs favorably against
state-of-the-art methods.
Comment: AAAI 2020. The training code and models are available at
https://github.com/jspan/PHYSICS S
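The blur-and-substitute step described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: it assumes the low-resolution image was formed by blurring the high-resolution image with the known kernel and then sampling every `scale`-th pixel (the un-decimated positions), and the naive blur helper and function names are hypothetical.

```python
import numpy as np

def blur2d(img, kernel):
    """Naive 'same'-size 2-D filtering with edge padding (illustration only)."""
    kh, kw = kernel.shape
    pad = ((kh // 2, kh - 1 - kh // 2), (kw // 2, kw - 1 - kw // 2))
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def pixel_substitution(hr_estimate, lr_image, kernel, scale):
    """One pixel-substitution step: blur the intermediate HR estimate with
    the known kernel, then overwrite the pixels at the un-decimated
    positions (assumed here to be every `scale`-th pixel) with the
    corresponding low-resolution observations, so the result satisfies
    the image formation model by construction."""
    out = blur2d(hr_estimate, kernel)
    out[::scale, ::scale] = lr_image
    return out
```

In the paper's cascade, the output of this step would be fed back into the same network for further refinement; here only the formation-model constraint itself is shown.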
Sparsity Invariant CNNs
In this paper, we consider convolutional neural networks operating on sparse
inputs with an application to depth upsampling from sparse laser scan data.
First, we show that traditional convolutional networks perform poorly when
applied to sparse data even when the location of missing data is provided to
the network. To overcome this problem, we propose a simple yet effective sparse
convolution layer which explicitly considers the location of missing data
during the convolution operation. We demonstrate the benefits of the proposed
network architecture in synthetic and real experiments with respect to various
baseline approaches. Compared to dense baselines, the proposed sparse
convolution network generalizes well to novel datasets and is invariant to the
level of sparsity in the data. For our evaluation, we derive a novel dataset
from the KITTI benchmark, comprising 93k depth annotated RGB images. Our
dataset allows for training and evaluating depth upsampling and depth
prediction techniques in challenging real-world settings and will be made
available upon publication.
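The sparse convolution described above can be illustrated with a small single-channel NumPy sketch (an assumption for clarity; the paper's layer operates on learned multi-channel CNN features): only observed pixels contribute to the weighted sum, the sum is renormalized by the number of observed pixels in the window, and the validity mask is propagated with a max over the window.

```python
import numpy as np

def sparse_conv(x, mask, weight, eps=1e-8):
    """Normalized sparse convolution on a 2-D map (hedged sketch).

    x:      (H, W) input values, meaningful only where mask == 1
    mask:   (H, W) binary observation mask
    weight: (kh, kw) filter weights
    Returns the normalized response and the propagated mask.
    """
    kh, kw = weight.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x * mask, ((ph, ph), (pw, pw)))   # zeros outside the image
    mp = np.pad(mask, ((ph, ph), (pw, pw)))
    H, W = x.shape
    num = np.zeros((H, W))
    den = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    for i in range(kh):
        for j in range(kw):
            num += weight[i, j] * xp[i:i + H, j:j + W]  # weighted observed values
            den += mp[i:i + H, j:j + W]                 # count of observed pixels
            new_mask = np.maximum(new_mask, mp[i:i + H, j:j + W])
    return num / (den + eps), new_mask
```

The normalization by the observation count is what makes the response (approximately) invariant to the sparsity level: with a constant input, the output is the same whether one pixel or all nine pixels in the window are observed.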
Photometric Depth Super-Resolution
This study explores the use of photometric techniques (shape-from-shading and
uncalibrated photometric stereo) for upsampling the low-resolution depth map
from an RGB-D sensor to the higher resolution of the companion RGB image. A
single-shot variational approach is first put forward, which is effective as
long as the target's reflectance is piecewise-constant. It is then shown that
this dependency upon a specific reflectance model can be relaxed by focusing on
a specific class of objects (e.g., faces) and delegating reflectance estimation
to a deep neural network. A multi-shot strategy based on randomly varying
lighting conditions is eventually discussed. It requires no training or prior
on the reflectance, yet this comes at the price of a dedicated acquisition
setup. Both quantitative and qualitative evaluations illustrate the
effectiveness of the proposed methods on synthetic and real-world scenarios.
Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence
(T-PAMI), 2019. The first three authors contributed equally.
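Shape-from-shading and photometric stereo both rest on a shading model relating surface geometry and reflectance to image intensity. A minimal sketch of the Lambertian model commonly assumed in this setting, I = rho * max(0, n . l), is shown below (the function and variable names are illustrative, not the paper's formulation):

```python
import numpy as np

def lambertian_shading(normals, albedo, light):
    """Render Lambertian image intensities I = rho * max(0, <n, l>).

    normals: (H, W, 3) unit surface normals
    albedo:  (H, W) per-pixel reflectance rho
    light:   (3,) directional lighting vector
    Surfaces facing away from the light are clamped to zero (attached shadow).
    """
    shading = np.clip(np.tensordot(normals, light, axes=([2], [0])), 0.0, None)
    return albedo * shading
```

Photometric depth super-resolution inverts this model: given the observed intensities and (estimated or varied) lighting, it recovers fine-scale normals that refine the coarse depth from the RGB-D sensor.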