Warp and Learn: Novel Views Generation for Vehicles and Other Objects
In this work we introduce a new self-supervised, semi-parametric approach for
synthesizing novel views of a vehicle starting from a single monocular image.
Unlike purely parametric (i.e., entirely learning-based) methods, we show how
a priori geometric knowledge about the object and the 3D world can be
successfully integrated into a deep-learning-based image generation framework.
Since this geometric component is not learnt, we call our approach
semi-parametric.
In particular, we exploit the symmetry and piece-wise planarity of man-made
objects to inject rich a priori visual information into the novel-view
synthesis process. An Image Completion Network (ICN) is then trained to
generate a realistic image starting from this geometric guidance.
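The geometric guidance rests on warping pixels between views under a piece-wise planar assumption, where each plane induces a homography between images. As a rough illustration of that core operation (not the authors' implementation; function name and nearest-neighbour sampling are our own simplifications), the following sketch warps an image by a single 3x3 planar homography using inverse mapping:

```python
import numpy as np

def warp_homography(img, H):
    """Warp an image by a 3x3 planar homography via inverse mapping.

    Illustrative sketch: `H` maps source pixel coordinates (x, y, 1)
    to target coordinates. Each target pixel is filled by sampling the
    source at H^-1 (nearest neighbour, no anti-aliasing); pixels that
    map outside the source stay zero.
    """
    h, w = img.shape[:2]
    Hinv = np.linalg.inv(H)
    # Homogeneous coordinates of every target pixel, shape (3, N).
    ys, xs = np.mgrid[0:h, 0:w]
    tgt = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = Hinv @ tgt.astype(float)
    src = src[:2] / src[2]  # de-homogenize
    sx = np.round(src[0]).astype(int).reshape(h, w)
    sy = np.round(src[1]).astype(int).reshape(h, w)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out[valid] = img[sy[valid], sx[valid]]
    return out
```

In the full method, one such warp per visible planar patch (plus mirrored pixels obtained from the object's symmetry plane) produces the incomplete, geometrically consistent image that the ICN is trained to complete.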
This careful blend between parametric and non-parametric components allows us
to i) operate in a real-world scenario, ii) preserve high-frequency visual
information such as textures, iii) handle truly arbitrary 3D roto-translations
of the input and iv) perform shape transfer to completely different 3D models.
Finally, we show that our approach can easily be complemented with synthetic
data and extended to other rigid objects with completely different topology,
even in the presence of concave structures and holes (e.g., chairs).
A comprehensive experimental analysis against state-of-the-art competitors
shows the efficacy of our method from both a quantitative and a perceptual
point of view.
Supplementary material, animated results, code and data are available at:
https://github.com/ndrplz/semiparametri