Signal reconstruction via operator guiding
Signal reconstruction from a sample using an orthogonal projector onto a
guiding subspace is theoretically well justified, but may be difficult to
practically implement. We propose more general guiding operators, which
increase signal components in the guiding subspace relative to those in a
complementary subspace, e.g., iterative low-pass edge-preserving filters for
super-resolution of images. Two examples of super-resolution illustrate our
technology: a no-flash RGB photo guided using a high resolution flash RGB
photo, and a depth image guided using a high resolution RGB photo.
Comment: 5 pages, 8 figures. To appear in Proceedings of SampTA 2017: Sampling Theory and Applications, 12th International Conference, July 3-7, 2017, Tallinn, Estonia.
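The guiding-operator idea in this abstract can be sketched with a minimal 1-D iteration that alternates a smoothing (guiding) operator with consistency on the sampled points. This is an illustrative sketch only: the box filter stands in for the paper's guiding operators (e.g. edge-preserving filters), and all function names are hypothetical.

```python
import numpy as np

def guided_reconstruct(sample, mask, guide_op, n_iter=200):
    """Reconstruct a signal from partial samples by iterating a guiding operator.

    sample:  observed values (arbitrary where unknown)
    mask:    True at sampled points
    guide_op: operator that boosts components in the guiding subspace
    """
    x = sample.copy()
    for _ in range(n_iter):
        x = guide_op(x)            # push energy into the guiding subspace
        x[mask] = sample[mask]     # enforce consistency with the sample
    return x

# Assumed guiding operator: a simple 5-tap moving-average low-pass filter.
def box_filter(x):
    return np.convolve(x, np.ones(5) / 5, mode="same")

# Reconstruct one period of a sine from every 4th sample.
t = np.linspace(0, 2 * np.pi, 64)
truth = np.sin(t)
mask = np.zeros(64, dtype=bool)
mask[::4] = True
sample = np.where(mask, truth, 0.0)
rec = guided_reconstruct(sample, mask, box_filter)
```

The fixed point of this iteration keeps the sampled values exactly while filling the gaps with a signal favored by the guiding operator, which is the role the paper assigns to general guiding operators in place of an exact orthogonal projector.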
Learning to Hallucinate Face Images via Component Generation and Enhancement
We propose a two-stage method for face hallucination. First, we generate
facial components of the input image using CNNs. These components represent the
basic facial structures. Second, we synthesize fine-grained facial structures
from high resolution training images. The details of these structures are
transferred into facial components for enhancement. Therefore, we generate
facial components to approximate ground truth global appearance in the first
stage and enhance them through recovering details in the second stage. The
experiments demonstrate that our method performs favorably against
state-of-the-art methods.
Comment: IJCAI 2017. Project page: http://www.cs.cityu.edu.hk/~yibisong/ijcai17_sr/index.htm
A Deep Primal-Dual Network for Guided Depth Super-Resolution
In this paper we present a novel method to increase the spatial resolution of
depth images. We combine a deep fully convolutional network with a non-local
variational method in a deep primal-dual network. The joint network computes a
noise-free, high-resolution estimate from a noisy, low-resolution input depth
map. Additionally, a high-resolution intensity image is used to guide the
reconstruction in the network. By unrolling the optimization steps of a
first-order primal-dual algorithm and formulating it as a network, we can train
our joint method end-to-end. This not only enables us to learn the weights of
the fully convolutional network, but also to optimize all parameters of the
variational method and its optimization procedure. The training of such a deep
network requires a large dataset for supervision. Therefore, we generate
high-quality depth maps and corresponding color images with a physically based
renderer. In an exhaustive evaluation we show that our method outperforms the
state-of-the-art on multiple benchmarks.
Comment: BMVC 2016
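The unrolling idea can be illustrated with a plain first-order primal-dual (Chambolle-Pock) loop for 1-D total-variation denoising: each loop iteration corresponds to one network layer, and the step sizes `tau`/`sigma` are the kind of parameters that become learnable once the loop is formulated as a network. This is a minimal sketch under simplified assumptions; the paper's network uses a non-local variational term jointly with a fully convolutional network, not plain TV.

```python
import numpy as np

def grad(x):
    """Forward difference with a zero last entry (Neumann boundary)."""
    return np.append(x[1:] - x[:-1], 0.0)

def div(p):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.empty_like(p)
    d[0] = p[0]
    d[1:-1] = p[1:-1] - p[:-2]
    d[-1] = -p[-2]
    return d

def unrolled_tv(f, lam, tau, sigma, n_layers=50):
    """Fixed number of primal-dual iterations for min_x lam*TV(x) + 0.5*||x-f||^2.

    Each iteration of the loop plays the role of one network layer.
    """
    x = f.copy()
    xbar = x.copy()
    p = np.zeros_like(f)
    for _ in range(n_layers):
        p = np.clip(p + sigma * grad(xbar), -lam, lam)    # dual prox (projection)
        x_new = (x + tau * div(p) + tau * f) / (1 + tau)  # primal prox (data term)
        xbar = 2 * x_new - x                              # overrelaxation step
        x = x_new
    return x

# Denoise a noisy step signal.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(32), np.ones(32)])
noisy = clean + 0.2 * rng.standard_normal(64)
denoised = unrolled_tv(noisy, lam=0.3, tau=0.25, sigma=0.25)
```

With `tau = sigma = 0.25` the step-size condition `tau * sigma * ||grad||^2 < 1` holds; in a trained unrolled network, such parameters (and the regularizer itself) are optimized end-to-end, as the abstract describes.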