2 research outputs found

    Single-view Object Shape Reconstruction Using Deep Shape Prior and Silhouette

    3D shape reconstruction from a single image is a highly ill-posed problem. Modern deep-learning-based systems address it by learning an end-to-end mapping from image to shape with a deep network. In this paper, we instead solve the problem with an online optimization framework inspired by traditional methods. Our framework employs a deep autoencoder to learn a set of latent codes for 3D object shapes, over which a probabilistic shape prior is fitted using a Gaussian Mixture Model (GMM). At inference, shape and pose are jointly optimized, guided by both image cues and the deep shape prior, without relying on initialization from any trained deep network. Surprisingly, our method achieves performance comparable to state-of-the-art methods even without training an end-to-end network, a promising step in this direction.
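    The optimization scheme the abstract describes can be sketched in miniature: fit a prior over autoencoder latent codes, then minimize an image term plus a prior term over the code at inference time. The sketch below is illustrative only and makes strong simplifying assumptions: the latent codes are random stand-ins for a trained autoencoder's codes, the GMM prior is reduced to a single diagonal Gaussian, and the "decoder" and "silhouette" are a hypothetical linear map and target vector; pose optimization is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for latent codes produced by a pretrained shape autoencoder.
    codes = rng.normal(size=(200, 8))

    # Fit a 1-component diagonal-Gaussian prior over the codes
    # (a full GMM would mix several such components).
    mu = codes.mean(axis=0)
    var = codes.var(axis=0) + 1e-6

    def prior_neg_log_prob(z):
        # Negative log-density of z under the prior, up to a constant.
        return 0.5 * np.sum((z - mu) ** 2 / var)

    # Hypothetical linear "decoder" from latent code to silhouette pixels,
    # and an observed silhouette generated from the mean shape.
    W = rng.normal(size=(64, 8))
    target = W @ mu

    LAM = 0.1  # weight of the shape-prior term

    def loss(z):
        # Image term (silhouette reprojection error) + shape-prior term.
        return np.sum((W @ z - target) ** 2) + LAM * prior_neg_log_prob(z)

    # Online optimization of the latent code by gradient descent,
    # starting from a random code rather than a trained network's prediction.
    z = rng.normal(size=8)
    for _ in range(500):
        grad = 2 * W.T @ (W @ z - target) + LAM * (z - mu) / var
        z -= 1e-3 * grad
    ```

    In the paper's setting the decoder is a deep network and the image term comes from a rendered silhouette, so the gradient would flow through a renderer rather than a fixed matrix; the structure of the objective, however, is the same.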

    Neural Object Descriptors for Multi-View Shape Reconstruction

    The choice of scene representation is crucial, both for the shape inference algorithms it requires and for the applications it enables. We present efficient and optimisable multi-class learned object descriptors, together with a novel probabilistic and differentiable rendering engine, for principled full object shape inference from one or more RGB-D images. Our framework allows for accurate and robust 3D object reconstruction, enabling multiple applications including robot grasping and placing, augmented reality, and the first object-level SLAM system capable of optimising object poses and shapes jointly with the camera trajectory.
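    The key idea of jointly optimising an object's shape code and pose against image measurements can be illustrated with a toy problem. Everything here is a hypothetical stand-in: the learned descriptor's decoder is replaced by a fixed linear map to a handful of 3D surface points, the pose is a pure translation, and the differentiable renderer is replaced by direct point residuals against noisy "RGB-D" observations.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical linear "decoder": 4-D object code -> 10 surface points (flat 30-vector).
    D = rng.normal(size=(30, 4))
    z_true = rng.normal(size=4)
    t_true = np.array([0.5, -0.2, 1.0])

    def render(z, t):
        # Decode the object and place it in the world with pose t (translation only).
        return (D @ z).reshape(10, 3) + t

    # Observations: noisy surface points, as if fused from several RGB-D views.
    obs = render(z_true, t_true) + 0.01 * rng.normal(size=(10, 3))

    # Jointly optimise shape code and pose by gradient descent on the residuals,
    # minimising 0.5 * ||render(z, t) - obs||^2.
    z = np.zeros(4)
    t = np.zeros(3)
    for _ in range(2000):
        r = render(z, t) - obs             # residuals, shape (10, 3)
        z -= 1e-2 * (D.T @ r.reshape(-1))  # chain rule through the decoder
        t -= 1e-2 * r.sum(axis=0)          # pose gradient: sum over points
    ```

    In the full system the residuals would come from a probabilistic, differentiable renderer comparing predicted and observed depth, and camera poses would be optimised alongside the per-object variables, but the joint pose-and-shape least-squares structure is the same.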