
    Learning to Reconstruct Shapes from Unseen Classes

    From a single image, humans are able to perceive the full 3D shape of an object by exploiting learned shape priors from everyday life. Contemporary single-image 3D reconstruction algorithms aim to solve this task in a similar fashion, but often end up with priors that are highly biased by training classes. Here we present an algorithm, Generalizable Reconstruction (GenRe), designed to capture more generic, class-agnostic shape priors. We achieve this with an inference network and training procedure that combine 2.5D representations of visible surfaces (depth and silhouette), spherical shape representations of both visible and non-visible surfaces, and 3D voxel-based representations, in a principled manner that exploits the causal structure of how 3D shapes give rise to 2D images. Experiments demonstrate that GenRe performs well on single-view shape reconstruction, and generalizes to diverse novel objects from categories not seen during training.
    Comment: NeurIPS 2018 (Oral). The first two authors contributed equally to this paper. Project page: http://genre.csail.mit.edu
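    A minimal sketch of the staged inference the abstract describes: an image is mapped to 2.5D depth and silhouette, the visible surface is projected onto a spherical map whose hidden part is inpainted, and the completed map is back-projected into a voxel grid for refinement. PyTorch is an assumption, and all class and function names below are hypothetical placeholders rather than the authors' actual components; the geometric projections are stubbed out.

```python
import torch
import torch.nn as nn


def project_depth_to_sphere(depth, silhouette, resolution=64):
    # Placeholder for projecting the visible surface onto a spherical map.
    return torch.zeros(depth.shape[0], 1, resolution, resolution)


def sphere_to_voxels(sphere_map, resolution=32):
    # Placeholder for back-projecting the spherical map into a voxel grid.
    return torch.zeros(sphere_map.shape[0], 1, resolution, resolution, resolution)


class GenReSketch(nn.Module):
    """Staged inference: image -> 2.5D sketch -> spherical map -> voxels."""

    def __init__(self, depth_net, spherical_net, voxel_net):
        super().__init__()
        self.depth_net = depth_net          # RGB image -> depth + silhouette
        self.spherical_net = spherical_net  # partial -> completed spherical map
        self.voxel_net = voxel_net          # coarse voxels -> refined occupancy grid

    def forward(self, image):
        depth, silhouette = self.depth_net(image)              # stage 1: visible-surface 2.5D estimate
        partial = project_depth_to_sphere(depth, silhouette)   # fixed geometric projection
        completed = self.spherical_net(partial)                # stage 2: inpaint non-visible surfaces
        voxels = sphere_to_voxels(completed)                   # fixed geometric back-projection
        return self.voxel_net(voxels)                          # stage 3: voxel refinement
```

    Factoring inference this way keeps the learned modules operating on largely viewpoint- and class-agnostic representations (spherical maps and voxels), which is what the abstract credits for generalization to unseen categories.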

    Wearable Structured Light System in Non-Rigid Configuration

    Traditionally, structured light methods have been studied in rigid configurations, in which the position and orientation between the light emitter and the camera are fixed and known beforehand. In this paper we break with this rigidity and present a new structured light system in a non-rigid configuration. The system is composed of a wearable standard perspective camera and a simple laser emitter. Our non-rigid configuration permits free motion of the light emitter with respect to the camera. The point-based pattern emitted by the laser makes it easy to establish correspondences between the camera image and a virtual image generated from the light emitter. Using these correspondences, our method computes the rotation and translation, up to scale, of the scene planes onto which the point pattern is projected, and reconstructs them. This constitutes a very useful tool for navigation applications in indoor environments, which are mainly composed of planar surfaces.
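    The plane recovery step can be illustrated with the standard plane-induced homography decomposition: from point correspondences between the virtual emitter image and the camera image, a homography is estimated and factored into a rotation, a translation up to scale, and a plane normal. The sketch below assumes OpenCV and known intrinsics for both views; it shows the generic technique, not necessarily the paper's exact algorithm, and all names are illustrative.

```python
import cv2
import numpy as np


def recover_plane_pose(pattern_pts, image_pts, K_cam, K_proj):
    """pattern_pts: Nx2 points in the virtual emitter image.
       image_pts:   Nx2 corresponding points detected in the camera image.
       K_cam, K_proj: 3x3 intrinsic matrices for camera and virtual view."""
    # Normalize both point sets so the homography relates normalized rays.
    pattern_n = cv2.undistortPoints(
        np.asarray(pattern_pts, dtype=np.float64).reshape(-1, 1, 2), K_proj, None)
    image_n = cv2.undistortPoints(
        np.asarray(image_pts, dtype=np.float64).reshape(-1, 1, 2), K_cam, None)

    # Plane-induced homography between the two views (RANSAC for robustness;
    # small threshold because the points are in normalized coordinates).
    H, _ = cv2.findHomography(pattern_n, image_n, cv2.RANSAC, 1e-3)

    # Decompose into up to four {R, t/d, n} hypotheses; translation is up to scale.
    # Selecting the physically valid hypothesis (e.g. points in front of both
    # views) is omitted from this sketch.
    n_sols, rotations, translations, normals = cv2.decomposeHomographyMat(H, np.eye(3))
    return rotations, translations, normals
```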

    One-Shot Scanning Using De Bruijn Spaced Grids

    In this paper we present a new "one-shot" method to reconstruct the shape of dynamic 3D objects and scenes based on active illumination. In common with other related prior-art methods, a static grid pattern is projected onto the scene, a video sequence of the illuminated scene is captured, a shape estimate is produced independently for each video frame, and the one-shot property is realized at the expense of spatial resolution. The main challenge in grid-based one-shot methods is to engineer the pattern and algorithms so that the correspondence between pattern grid points and their images can be established very quickly and without ambiguity. We present an efficient one-shot method which exploits simple geometric constraints to solve the correspondence problem. We also introduce De Bruijn spaced grids, a novel grid pattern, and show with strong empirical data that the resulting scheme is much more robust than schemes based on uniformly spaced grids.
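    The key property of a De Bruijn spaced grid is that every window of n consecutive line spacings occurs exactly once in the pattern, so a short run of detected gaps identifies its own position without ambiguity. The sketch below, with illustrative spacing values and hypothetical function names, shows one way to generate such a pattern and locate an observed window; it follows the general idea rather than the paper's specific pattern design or decoding algorithm.

```python
def de_bruijn(k, n):
    """De Bruijn sequence over alphabet {0..k-1} containing every length-n word once."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq


def grid_line_positions(spacings=(8, 13), order=4):
    """Place grid lines with gaps chosen by a De Bruijn sequence over `spacings`."""
    code = de_bruijn(len(spacings), order)
    positions, x = [0], 0
    for symbol in code:
        x += spacings[symbol]
        positions.append(x)
    return positions, code


def locate_window(observed_gaps, code, spacings=(8, 13)):
    """Find where a detected run of gaps sits in the pattern (unique by construction)."""
    symbols = [min(range(len(spacings)), key=lambda i: abs(g - spacings[i]))
               for g in observed_gaps]
    code_str = ''.join(map(str, code * 2))  # double the code for cyclic matching
    idx = code_str.find(''.join(map(str, symbols)))
    return idx % len(code) if idx >= 0 else -1
```

    With a binary alphabet and order 4, for example, any 4 consecutive gaps form a unique window among the 16 possible ones, which is what lets correspondences be resolved from a single frame.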