    Hierarchical Surface Prediction for 3D Object Reconstruction

    Recently, Convolutional Neural Networks have shown promising results for 3D geometry prediction. They can make predictions from very little input data such as a single color image. A major limitation of such approaches is that they only predict a coarse resolution voxel grid, which does not capture the surface of the objects well. We propose a general framework, called hierarchical surface prediction (HSP), which facilitates prediction of high resolution voxel grids. The main insight is that it is sufficient to predict high resolution voxels around the predicted surfaces. The exterior and interior of the objects can be represented with coarse resolution voxels. Our approach is not dependent on a specific input type. We show results for geometry prediction from color images, depth images and shape completion from partial voxel grids. Our analysis shows that our high resolution predictions are more accurate than low resolution predictions. Comment: 3DV 2017
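
    The coarse-to-fine idea in this abstract can be illustrated with a toy refinement loop: only cells that appear to contain the surface ("boundary" cells) are subdivided, while purely interior or exterior cells stay coarse. The sketch below is not the authors' network; a synthetic sphere occupancy stands in for the CNN's predicted geometry, and the function and variable names are hypothetical.

```python
# Toy sketch of hierarchical, surface-focused refinement (assumed stand-in,
# not the HSP model): subdivide only cells likely to contain the surface.
import numpy as np

def occupancy(points, radius=0.35):
    """Stand-in for predicted geometry: 1 inside a sphere centred at 0.5."""
    return (np.linalg.norm(points - 0.5, axis=-1) < radius).astype(np.float32)

def classify_cell(lo, hi, samples=4):
    """Label a cell 'inside', 'outside', or 'boundary' by sampling occupancy."""
    grid = np.stack(np.meshgrid(*[np.linspace(l, h, samples) for l, h in zip(lo, hi)],
                                indexing="ij"), axis=-1).reshape(-1, 3)
    occ = occupancy(grid)
    if occ.all():
        return "inside"
    if not occ.any():
        return "outside"
    return "boundary"

def refine(lo, hi, depth, max_depth, leaves):
    """Recursively subdivide only boundary cells, octree style."""
    label = classify_cell(lo, hi)
    if label != "boundary" or depth == max_depth:
        leaves.append((lo, hi, label))
        return
    mid = (lo + hi) / 2.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                child_lo = np.array([lo[0] if dx == 0 else mid[0],
                                     lo[1] if dy == 0 else mid[1],
                                     lo[2] if dz == 0 else mid[2]])
                child_hi = child_lo + (hi - lo) / 2.0
                refine(child_lo, child_hi, depth + 1, max_depth, leaves)

leaves = []
refine(np.zeros(3), np.ones(3), depth=0, max_depth=4, leaves=leaves)
# Far fewer cells than a uniform grid at the same finest resolution.
print(len(leaves), "adaptive cells vs.", 16 ** 3, "cells at uniform 16^3 resolution")
```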

    Pose Induction for Novel Object Categories

    We address the task of predicting pose for objects of unannotated object categories from a small seed set of annotated object classes. We present a generalized classifier that can reliably induce pose given a single instance of a novel category. When a large collection of novel instances is available, our approach jointly reasons over all instances to improve the initial estimates. We empirically validate the various components of our algorithm and quantitatively show that our method produces reliable pose estimates. We also show qualitative results on a diverse set of classes and further demonstrate the applicability of our system for learning shape models of novel object classes.
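
    A loose illustration of the two stages described above, under stated assumptions (this is not the paper's model): a classifier trained on the annotated seed classes scores discretised azimuth bins for each novel-class instance, and when many instances are available, each instance's belief is pooled with the beliefs of similar-looking instances. All names and the random stand-in data below are hypothetical.

```python
# Hypothetical sketch: per-instance pose induction followed by joint refinement.
import numpy as np

rng = np.random.default_rng(0)
num_instances, num_bins, feat_dim = 50, 24, 128   # e.g. 24 azimuth bins of 15 degrees

# Stand-ins for the seed-class classifier's outputs on novel-class instances.
pose_logits = rng.normal(size=(num_instances, num_bins))
features = rng.normal(size=(num_instances, feat_dim))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Step 1: initial per-instance pose estimates.
initial = softmax(pose_logits)

# Step 2: joint reasoning -- pool each instance's belief with those of
# similar instances (cosine similarity in feature space, negatives clipped).
normed = features / np.linalg.norm(features, axis=1, keepdims=True)
similarity = np.clip(normed @ normed.T, 0.0, None)
refined = similarity @ initial
refined /= refined.sum(axis=1, keepdims=True)

print("initial argmax bins:", initial.argmax(axis=1)[:5])
print("refined argmax bins:", refined.argmax(axis=1)[:5])
```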

    Articulation-aware Canonical Surface Mapping

    We tackle the tasks of: 1) predicting a Canonical Surface Mapping (CSM) that indicates the mapping from 2D pixels to corresponding points on a canonical template shape, and 2) inferring the articulation and pose of the template corresponding to the input image. While previous approaches rely on keypoint supervision for learning, we present an approach that can learn without such annotations. Our key insight is that these tasks are geometrically related, and we can obtain supervisory signal via enforcing consistency among the predictions. We present results across a diverse set of animal object categories, showing that our method can learn articulation and CSM prediction from image collections using only foreground mask labels for training. We empirically show that allowing articulation helps learn more accurate CSM prediction, and that enforcing the consistency with predicted CSM is similarly critical for learning meaningful articulation.Comment: To appear at CVPR 2020, project page https://nileshkulkarni.github.io/acsm
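
    A minimal sketch of the kind of geometric consistency signal the abstract refers to, under simplifying assumptions (not the authors' code, and per-part articulation is omitted for brevity): the predicted CSM maps each foreground pixel to a 3D point on the canonical template, the predicted pose transforms that point, and a pinhole camera projects it back into the image, where it should land on the pixel it started from. All network outputs below are random stand-ins with hypothetical names.

```python
# Hypothetical sketch of a reprojection (cycle-consistency) loss between
# predicted CSM and predicted pose; no keypoint labels are involved.
import numpy as np

rng = np.random.default_rng(1)
num_pixels = 200

# Stand-ins for one image's network outputs.
pixels_uv = rng.uniform(0, 1, size=(num_pixels, 2))     # foreground pixel coords
csm_points = rng.uniform(-1, 1, size=(num_pixels, 3))   # predicted template points
angle = 0.3                                             # predicted rotation about y
translation = np.array([0.0, 0.0, 5.0])                 # predicted translation
focal = 1.0                                             # assumed camera intrinsics

rotation = np.array([[np.cos(angle), 0, np.sin(angle)],
                     [0, 1, 0],
                     [-np.sin(angle), 0, np.cos(angle)]])

def project(points_3d):
    """Pinhole projection of posed template points to normalised image coords."""
    cam = points_3d @ rotation.T + translation
    return focal * cam[:, :2] / cam[:, 2:3]

reprojected_uv = project(csm_points)
consistency_loss = np.mean(np.sum((reprojected_uv - pixels_uv) ** 2, axis=1))
print("cycle-consistency loss:", consistency_loss)
```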

    On the Optimality of a Class of LP-based Algorithms

    In this paper we are concerned with a class of packing and covering problems which includes Vertex Cover and Independent Set. Typically, one can write an LP relaxation and then round the solution. We explain why the simple LP-based rounding algorithm for the Vertex Cover problem is optimal assuming the Unique Games Conjecture (UGC). Complementing Raghavendra's result, our result generalizes to a class of strict, covering/packing type CSPs.
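
    For reference, the simple LP-based rounding algorithm for Vertex Cover mentioned in the abstract works as follows: solve the LP relaxation (minimise the sum of x_v subject to x_u + x_v >= 1 for every edge and 0 <= x_v <= 1), then keep every vertex with x_v >= 1/2. Since each edge constraint forces at least one endpoint to value at least 1/2, the rounded set is a cover, and its cost is at most twice the LP optimum, giving a 2-approximation. The graph below is an arbitrary illustrative example.

```python
# LP relaxation + threshold rounding for Vertex Cover (classic 2-approximation).
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # example graph
num_vertices = 4

c = np.ones(num_vertices)                    # minimise sum_v x_v
A_ub = np.zeros((len(edges), num_vertices))  # -x_u - x_v <= -1  <=>  x_u + x_v >= 1
for i, (u, v) in enumerate(edges):
    A_ub[i, u] = -1.0
    A_ub[i, v] = -1.0
b_ub = -np.ones(len(edges))

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * num_vertices)
fractional = result.x
cover = [v for v in range(num_vertices) if fractional[v] >= 0.5]

print("LP solution:", np.round(fractional, 3))
print("rounded vertex cover:", cover)
```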