Learned Multi-View Texture Super-Resolution
We present a super-resolution method capable of creating a high-resolution
texture map for a virtual 3D object from a set of lower-resolution images of
that object. Our architecture unifies the concepts of (i) multi-view
super-resolution based on the redundancy of overlapping views and (ii)
single-view super-resolution based on a learned prior of high-resolution (HR)
image structure. The principle of multi-view super-resolution is to invert the
image formation process and recover the latent HR texture from multiple
lower-resolution projections. We map that inverse problem into a block of
suitably designed neural network layers, and combine it with a standard
encoder-decoder network for learned single-image super-resolution. Wiring the
image formation model into the network avoids having to learn perspective
mapping from textures to images, and elegantly handles a varying number of
input views. Experiments demonstrate that the combination of multi-view
observations and learned prior yields improved texture maps.

Comment: 11 pages, 5 figures, 2019 International Conference on 3D Vision (3DV)
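The multi-view inverse problem described above can be illustrated with a toy 1-D analogue (an illustrative assumption, not the paper's 2-D texture setup with perspective projection): each low-resolution view observes the latent high-resolution signal through a known decimation operator with a per-view sub-pixel shift, and stacking enough complementary views yields a system that least squares can invert exactly in the noise-free case.

```python
import numpy as np

rng = np.random.default_rng(0)
n, factor = 16, 4              # HR signal length, downsampling factor
x_true = rng.random(n)         # latent high-resolution signal
shifts = [0, 1, 2, 3]          # per-view sub-pixel offsets (stand-in for viewpoints)

def view_operator(shift, factor, n):
    """Decimation operator for one view: keeps every `factor`-th HR sample,
    offset by this view's sub-pixel `shift`."""
    A = np.zeros((n // factor, n))
    for r in range(n // factor):
        A[r, r * factor + shift] = 1.0
    return A

ops = [view_operator(s, factor, n) for s in shifts]
views = [A @ x_true for A in ops]          # the low-resolution observations

# Stack all views into one joint linear system and solve by least squares.
A_stack = np.vstack(ops)
y_stack = np.concatenate(views)
x_rec, *_ = np.linalg.lstsq(A_stack, y_stack, rcond=None)
print(np.allclose(x_rec, x_true))          # → True (complementary shifts, no noise)
```

With blur, noise, or fewer views the stacked system becomes ill-posed, which is exactly where the learned single-image prior in the paper's encoder-decoder branch takes over.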
Leveraging 2D data to learn textured 3D mesh generation
Numerous methods have been proposed for probabilistic generative modelling of
3D objects. However, none of these is able to produce textured objects, which
renders them of limited use for practical tasks. In this work, we present the
first generative model of textured 3D meshes. Training such a model would
traditionally require a large dataset of textured meshes, but unfortunately,
existing datasets of meshes lack detailed textures. We instead propose a new
training methodology that allows learning from collections of 2D images without
any 3D information. To do so, we train our model to explain a distribution of
images by modelling each image as a 3D foreground object placed in front of a
2D background. Thus, it learns to generate meshes that, when rendered, produce
images similar to those in its training set.
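The "3D foreground placed in front of a 2D background" image model can be sketched as plain alpha compositing (a minimal numpy sketch; the array shapes and silhouette here are illustrative assumptions, and the foreground stands in for the output of a differentiable renderer):

```python
import numpy as np

def composite(fg_rgb, alpha, bg_rgb):
    """Alpha-over compositing: out = a * fg + (1 - a) * bg per pixel,
    with `alpha` the rendered object's silhouette mask."""
    a = alpha[..., None]                      # broadcast over RGB channels
    return a * fg_rgb + (1.0 - a) * bg_rgb

rng = np.random.default_rng(1)
H, W = 4, 4
fg = rng.random((H, W, 3))                    # stand-in for a rendered textured mesh
alpha = np.zeros((H, W)); alpha[1:3, 1:3] = 1.0   # toy object silhouette
bg = rng.random((H, W, 3))                    # generated 2D background

img = composite(fg, alpha, bg)
target = rng.random((H, W, 3))                # stand-in for a training image
loss = np.mean((img - target) ** 2)           # reconstruction term driving training
```

Because the composite is differentiable in the foreground, mask, and background, a reconstruction loss on `img` can propagate gradients back into the mesh generator.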
A well-known problem when generating meshes with deep networks is the
emergence of self-intersections, which are problematic for many use cases. As a
second contribution, we therefore introduce a new generation process for 3D
meshes that guarantees no self-intersections arise, based on the physical
intuition that faces should push one another out of the way as they move.
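A minimal 1-D analogue of that pushing intuition (an illustrative assumption, not the paper's actual mesh update): points on a line receive displacements, and a left-to-right sweep makes a moving point push its right neighbour ahead of it instead of passing through, so the ordering (the 1-D counterpart of intersection-freeness) is preserved by construction.

```python
def push_out(positions, displacements, gap=0.0):
    """Apply each point's displacement, then sweep left to right so a moving
    point pushes its right neighbour ahead rather than crossing it.
    Guarantees the output stays sorted (no 'self-intersections' in 1-D)."""
    new = [p + d for p, d in zip(positions, displacements)]
    for i in range(1, len(new)):
        new[i] = max(new[i], new[i - 1] + gap)
    return new

# The middle point is overtaken by the left one and gets pushed along:
out = push_out([0.0, 1.0, 2.0], [1.5, 0.0, 0.0])
print(out)  # → [1.5, 1.5, 2.0]
```

In the paper the same idea applies to mesh faces in 3D; this sketch only shows why resolving collisions during the motion, rather than penalising them afterwards, yields a hard guarantee.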
We conduct extensive experiments on our approach, reporting quantitative and
qualitative results on both synthetic data and natural images. These show our
method successfully learns to generate plausible and diverse textured 3D
samples for five challenging object classes.