Learned Multi-View Texture Super-Resolution
We present a super-resolution method capable of creating a high-resolution
texture map for a virtual 3D object from a set of lower-resolution images of
that object. Our architecture unifies the concepts of (i) multi-view
super-resolution based on the redundancy of overlapping views and (ii)
single-view super-resolution based on a learned prior of high-resolution (HR)
image structure. The principle of multi-view super-resolution is to invert the
image formation process and recover the latent HR texture from multiple
lower-resolution projections. We map that inverse problem into a block of
suitably designed neural network layers, and combine it with a standard
encoder-decoder network for learned single-image super-resolution. Wiring the
image formation model into the network avoids having to learn perspective
mapping from textures to images, and elegantly handles a varying number of
input views. Experiments demonstrate that the combination of multi-view
observations and a learned prior yields improved texture maps.
Comment: 11 pages, 5 figures, 2019 International Conference on 3D Vision (3DV)
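As a rough illustration of the inverse-problem view described above, the sketch below recovers a high-resolution image from several low-resolution projections by unrolled gradient descent on the reprojection error. Everything here is a simplifying assumption: a plain average-pooling operator stands in for the paper's perspective projection, blur, and decimation, and the learned encoder-decoder prior is omitted.

```python
import numpy as np

def downsample(x, f):
    """Toy linear image-formation operator A_i: average-pool by factor f.
    A stand-in for the paper's perspective projection, blur, and decimation."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample_adjoint(y, f):
    """Adjoint A_i^T of average pooling: replicate each pixel and rescale."""
    return np.repeat(np.repeat(y, f, axis=0), f, axis=1) / f**2

def multiview_sr(views, f, steps=200, lr=1.0):
    """Recover a latent HR texture T from LR views y_i by unrolled gradient
    descent on sum_i ||A_i T - y_i||^2. This is the multi-view inversion
    block only; the paper additionally couples it with a learned
    single-image prior network."""
    h, w = views[0].shape
    T = np.zeros((h * f, w * f))
    for _ in range(steps):
        grad = np.zeros_like(T)
        for y in views:                    # handles any number of input views
            grad += upsample_adjoint(downsample(T, f) - y, f)
        T -= lr * grad
    return T
```

Because each iteration is a fixed, differentiable operation, the loop unrolls naturally into a block of network layers, which is how a varying number of views can be absorbed without relearning the projection geometry.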
Deep Markov Random Field for Image Modeling
Markov Random Fields (MRFs), a formulation widely used in generative image
modeling, have long been plagued by a lack of expressive power, primarily
because conventional MRF formulations use simplistic factors to capture local
patterns. In this paper, we move beyond
such limitations, and propose a novel MRF model that uses fully-connected
neurons to express the complex interactions among pixels. Through theoretical
analysis, we reveal an inherent connection between this model and recurrent
neural networks, and thereon derive an approximate feed-forward network that
couples multiple RNNs along opposite directions. This formulation combines the
expressive power of deep neural networks and the cyclic dependency structure of
MRF in a unified model, bringing the modeling capability to a new level. The
feed-forward approximation also allows it to be efficiently learned from data.
Experimental results on a variety of low-level vision tasks show notable
improvement over the state of the art.
Comment: Accepted at ECCV 2016
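A toy sketch of the coupled-RNN idea follows: one tanh recurrence sweeps left-to-right and another right-to-left along a single image row, and their hidden states are summed. The random weights, single row, single direction pair, and additive coupling are all illustrative assumptions; the paper's network sweeps the full 2D grid in several directions with learned parameters.

```python
import numpy as np

def scan(x, W, U, reverse=False):
    """One directional RNN sweep over a pixel sequence:
    h_t = tanh(W x_t + U h_{t-1}). Returns the hidden state at each pixel."""
    T, k = x.shape[0], U.shape[0]
    hs, h = np.zeros((T, k)), np.zeros(k)
    steps = reversed(range(T)) if reverse else range(T)
    for t in steps:
        h = np.tanh(W @ x[t] + U @ h)
        hs[t] = h
    return hs

# Couple two opposite-direction recurrences over one image row. In the
# paper the weights are learned; random values are used here only to
# make the sketch runnable.
rng = np.random.default_rng(0)
d, k = 3, 8                                # input channels, hidden units
W = 0.1 * rng.normal(size=(k, d))
U = 0.1 * rng.normal(size=(k, k))
row = rng.normal(size=(32, d))             # one 32-pixel row, 3 channels
h = scan(row, W, U) + scan(row, W, U, reverse=True)   # additive coupling
```

The opposing sweeps give every pixel's state a dependency on pixels on both sides, which is the feed-forward surrogate for the cyclic dependencies of the underlying MRF.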
A Joint Intensity and Depth Co-Sparse Analysis Model for Depth Map Super-Resolution
High-resolution depth maps can be inferred from low-resolution depth
measurements and an additional high-resolution intensity image of the same
scene. To that end, we introduce a bimodal co-sparse analysis model that
captures the interdependency of registered intensity and depth information.
This model is based on the assumption that the co-supports of corresponding
bimodal image structures are aligned when computed by a suitable pair of
analysis operators. No analytic form of such operators exists, so we
propose a method for learning them from a set of registered training signals.
This learning process is done offline and returns a bimodal analysis operator
that is universally applicable to natural scenes. We use this to exploit the
bimodal co-sparse analysis model as a prior for solving inverse problems, which
leads to an efficient algorithm for depth map super-resolution.
Comment: 13 pages, 4 figures
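The sketch below illustrates how such a bimodal prior can drive depth super-resolution, under strong simplifications: hand-crafted finite differences replace the learned analysis operators, and a smooth intensity-weighted quadratic penalty replaces the sparse co-support term, so that depth discontinuities are cheap exactly where the intensity image also has edges.

```python
import numpy as np

def grads(x):
    """Finite-difference analysis operator: a simple stand-in for the
    learned bimodal analysis operators in the paper."""
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    return gx, gy

def neg_div(tx, ty):
    """Adjoint of grads (negative discrete divergence)."""
    g = -tx.copy()
    g[:, 1:] += tx[:, :-1]
    g -= ty
    g[1:, :] += ty[:-1, :]
    return g

def depth_sr(d_lr, intensity, f, lam=0.1, steps=300, step=0.2):
    """Depth super-resolution with an intensity-guided smoothness prior:
    depth gradients are penalized less wherever the HR intensity image
    also has gradients, mimicking the abstract's aligned co-supports."""
    ix, iy = grads(intensity)
    wx = 1.0 / (1.0 + (ix / 0.1) ** 2)    # low weight on intensity edges
    wy = 1.0 / (1.0 + (iy / 0.1) ** 2)
    d = np.kron(d_lr, np.ones((f, f)))    # nearest-neighbor initialization
    H, W = d.shape
    for _ in range(steps):
        # data term: average-pooled HR depth must match the LR measurement
        resid = d.reshape(H // f, f, W // f, f).mean(axis=(1, 3)) - d_lr
        g = np.repeat(np.repeat(resid, f, axis=0), f, axis=1) / f**2
        # prior term: intensity-weighted smoothing (constants folded into lam)
        dx, dy = grads(d)
        g += lam * neg_div(wx * dx, wy * dy)
        d -= step * g
    return d
```

The quadratic surrogate keeps every step a closed-form gradient update; the paper instead learns the operator pair offline and handles a genuinely sparse penalty.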