Bilateral back-projection for single image super resolution
In this paper, a novel algorithm for single image super resolution is proposed. Back-projection [1] can minimize the reconstruction error with an efficient iterative procedure. Although it can produce visually appealing results, this method suffers from chessboard and ringing artifacts, especially along strong edges. The underlying reason is that there is no edge guidance in the error correction process. Bilateral filtering achieves edge-preserving image smoothing by adding extra information from the feature domain. The basic idea is to smooth over pixels that are nearby in both the spatial domain and the feature domain. The proposed bilateral back-projection algorithm integrates bilateral filtering into the back-projection method. In our approach, the back-projection process is guided by edge information to avoid smoothing across edges, so the chessboard and ringing artifacts along image edges are removed. The proposed bilateral back-projection method obtains promising results efficiently.
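The idea in this abstract — running iterative back-projection but passing the back-projected error through an edge-aware bilateral filter guided by the current estimate — can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation; the box downsampling model, the zero initialization, and the kernel parameters (`sigma_s`, `sigma_r`, the filter radius) are all assumptions of the sketch.

```python
import numpy as np

def bilateral_filter(signal, guide, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Smooth `signal` while respecting edges in `guide`: each output pixel
    averages neighbours that are close both spatially and in the guide's
    intensity (feature) domain."""
    h, w = signal.shape
    pad_s = np.pad(signal, radius, mode='edge')
    pad_g = np.pad(guide, radius, mode='edge')
    ax = np.arange(-radius, radius + 1)
    sx, sy = np.meshgrid(ax, ax)
    w_spatial = np.exp(-(sx**2 + sy**2) / (2 * sigma_s**2))
    out = np.zeros_like(signal)
    for i in range(h):
        for j in range(w):
            patch_s = pad_s[i:i + 2*radius + 1, j:j + 2*radius + 1]
            patch_g = pad_g[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # range weight: suppress contributions from across an edge
            w_range = np.exp(-((patch_g - guide[i, j])**2) / (2 * sigma_r**2))
            wgt = w_spatial * w_range
            out[i, j] = (wgt * patch_s).sum() / wgt.sum()
    return out

def bilateral_back_projection(lr, scale=2, iters=20, sigma_s=1.5, sigma_r=0.1):
    """Iterative back-projection where each error correction is filtered by
    a bilateral kernel guided by the current high-resolution estimate."""
    h, w = lr.shape
    hr = np.zeros((h * scale, w * scale))       # start from zero so every
    for _ in range(iters):                      # iteration does real work
        # simulate the imaging model with a simple block-mean downsampling
        sim_lr = hr.reshape(h, scale, w, scale).mean(axis=(1, 3))
        # upsample the reconstruction error and back-project it,
        # filtered so the correction does not cross strong edges
        err_up = np.kron(lr - sim_lr, np.ones((scale, scale)))
        hr = hr + bilateral_filter(err_up, hr, sigma_s=sigma_s, sigma_r=sigma_r)
    return hr
```

Plain back-projection would add `err_up` directly; routing it through the guided bilateral filter is what suppresses the chessboard and ringing artifacts along edges.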
Sparsity Invariant CNNs
In this paper, we consider convolutional neural networks operating on sparse
inputs with an application to depth upsampling from sparse laser scan data.
First, we show that traditional convolutional networks perform poorly when
applied to sparse data even when the location of missing data is provided to
the network. To overcome this problem, we propose a simple yet effective sparse
convolution layer which explicitly considers the location of missing data
during the convolution operation. We demonstrate the benefits of the proposed
network architecture in synthetic and real experiments with respect to various
baseline approaches. Compared to dense baselines, the proposed sparse
convolution network generalizes well to novel datasets and is invariant to the
level of sparsity in the data. For our evaluation, we derive a novel dataset
from the KITTI benchmark, comprising 93k depth annotated RGB images. Our
dataset allows for training and evaluating depth upsampling and depth
prediction techniques in challenging real-world settings and will be made
available upon publication.
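The sparse convolution layer described above can be sketched as a mask-normalized convolution: only observed pixels contribute, the response is renormalized by how many pixels were observed under the kernel, and the validity mask is propagated by max-pooling. This is a single-channel NumPy illustration of the idea, not the trainable CNN layer itself; the uniform kernel and the epsilon are assumptions of the sketch.

```python
import numpy as np

def sparse_conv2d(x, mask, weight, eps=1e-8):
    """x: input map with missing entries; mask: 1.0 where x is observed.
    Returns the sparsity-normalized response and the propagated mask."""
    kh, kw = weight.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x * mask, ((ph, ph), (pw, pw)))   # zero out unobserved pixels
    mp = np.pad(mask, ((ph, ph), (pw, pw)))
    h, w = x.shape
    out = np.zeros((h, w))
    new_mask = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = xp[i:i + kh, j:j + kw]
            mpatch = mp[i:i + kh, j:j + kw]
            # normalize by the number of observed pixels under the kernel,
            # so the response does not depend on the sparsity level
            out[i, j] = (weight * patch).sum() / (mpatch.sum() + eps)
            # carry the validity mask forward via max-pooling
            new_mask[i, j] = mpatch.max()
    return out, new_mask
```

A dense convolution on the same input would mix in the zeros at unobserved locations and its output magnitude would shrink as the input gets sparser; the normalization above is what makes the response invariant to the sparsity level.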
A Deep Primal-Dual Network for Guided Depth Super-Resolution
In this paper we present a novel method to increase the spatial resolution of
depth images. We combine a deep fully convolutional network with a non-local
variational method in a deep primal-dual network. The joint network computes a
noise-free, high-resolution estimate from a noisy, low-resolution input depth
map. Additionally, a high-resolution intensity image is used to guide the
reconstruction in the network. By unrolling the optimization steps of a
first-order primal-dual algorithm and formulating it as a network, we can train
our joint method end-to-end. This not only enables us to learn the weights of
the fully convolutional network, but also to optimize all parameters of the
variational method and its optimization procedure. The training of such a deep
network requires a large dataset for supervision. Therefore, we generate
high-quality depth maps and corresponding color images with a physically based
renderer. In an exhaustive evaluation we show that our method outperforms the
state-of-the-art on multiple benchmarks.
Comment: BMVC 201
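The unrolling idea can be illustrated with a plain first-order primal-dual (Chambolle-Pock) loop for a TV-regularized quadratic data term, where each loop iteration corresponds to one network "layer". This is a simplified sketch: the guidance intensity image and the paper's non-local regularizer are omitted, total variation stands in for the variational term, and the step sizes `tau`, `sigma` and weight `lam` are fixed constants here, whereas in the paper such parameters would be learned end-to-end.

```python
import numpy as np

def grad(u):
    # forward differences, Neumann boundary (last row/column gradient is 0)
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # negative adjoint of grad under the conventions above
    d = np.zeros_like(px)
    d[:, 0] += px[:, 0]
    d[:, 1:] += px[:, 1:] - px[:, :-1]
    d[0, :] += py[0, :]
    d[1:, :] += py[1:, :] - py[:-1, :]
    return d

def unrolled_primal_dual(y, steps=50, lam=0.2, tau=0.25, sigma=0.25, theta=1.0):
    """Unrolled iterations for min_x 0.5*||x - y||^2 + lam * TV(x)."""
    x = y.copy()
    xb = x.copy()
    px = np.zeros_like(y)
    py = np.zeros_like(y)
    for _ in range(steps):                  # one iteration ~ one "layer"
        # dual ascent, then projection onto the ball {|p| <= lam}
        gx, gy = grad(xb)
        px, py = px + sigma * gx, py + sigma * gy
        scale = np.maximum(1.0, np.hypot(px, py) / lam)
        px, py = px / scale, py / scale
        # primal descent, then prox of the quadratic data term
        x_old = x
        x = (x + tau * div(px, py) + tau * y) / (1.0 + tau)
        # over-relaxation step
        xb = x + theta * (x - x_old)
    return x
```

Formulating the loop as a fixed-depth network is what allows the step sizes and regularizer weights to be trained jointly with the convolutional layers that precede it.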